Linear Regression is chosen for its simplicity, interpretability, and effectiveness in modeling linear relationships in data.
Linear relationship: Use when the relationship between independent and dependent variables is linear, e.g., predicting sales based on advertising spend.
Continuous outcome: Suitable for predicting continuous outcomes, like house prices based on features like size and location.
Interpretability: Produces coefficients that directly show how much each predictor moves the outcome, making the model easy to explain, as in the sketch below.
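As a quick illustration, here is a minimal scikit-learn sketch on made-up advertising-spend vs. sales numbers (all figures are hypothetical):

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# Hypothetical data: advertising spend (in $1k) vs. units sold
X = np.array([[10], [20], [30], [40], [50]])
y = np.array([120, 190, 310, 390, 510])

model = LinearRegression().fit(X, y)

# Interpretability: the slope says how many extra units each $1k buys
print("slope:", model.coef_[0], "intercept:", model.intercept_)
print("predicted sales at $35k spend:", model.predict([[35]])[0])
```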
Embeddings in vector databases represent data points as dense vectors for efficient similarity search and retrieval.
Embeddings map discrete items (words, products, documents) into a continuous vector space, enabling mathematical operations on them.
For example, words can be represented as vectors in Word2Vec, capturing semantic relationships.
Vector databases store these embeddings, allowing for fast nearest neighbor searches.
Applications include recommendation systems, semantic search, and image or document retrieval.
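A minimal sketch of the similarity-search idea using plain NumPy and random vectors (a real vector database would use an approximate-nearest-neighbor index rather than this brute-force scan):

```python
import numpy as np

# Toy "database" of 4-dimensional embeddings; real ones use hundreds of dims
rng = np.random.default_rng(0)
db = rng.random((1000, 4), dtype=np.float32)
query = rng.random(4, dtype=np.float32)

# Cosine similarity = dot product of L2-normalized vectors
db_norm = db / np.linalg.norm(db, axis=1, keepdims=True)
q_norm = query / np.linalg.norm(query)
scores = db_norm @ q_norm

# Indices of the 5 nearest neighbors, highest similarity first
top5 = np.argsort(scores)[::-1][:5]
print(top5, scores[top5])
```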
ARIMA is a statistical model used for forecasting time series data by capturing trend and autocorrelation structure (its seasonal extension, SARIMA, handles seasonality).
ARIMA stands for AutoRegressive Integrated Moving Average.
It combines three components: AR (AutoRegressive), I (Integrated), and MA (Moving Average).
AR component uses past values to predict future values.
I component involves differencing the data to make it stationary.
MA component models the prediction error as a linear combination of past error terms.
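A minimal statsmodels sketch on a synthetic trending series (the order=(1, 1, 1) choice is illustrative; in practice p, d, q come from ACF/PACF analysis or information criteria):

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.arima.model import ARIMA

# Synthetic monthly series: upward trend plus noise
rng = np.random.default_rng(0)
idx = pd.date_range("2020-01-01", periods=48, freq="MS")
y = pd.Series(2.0 * np.arange(48) + rng.normal(0, 3, 48), index=idx)

# order=(p, d, q): AR lags, differencing passes, MA lags
fit = ARIMA(y, order=(1, 1, 1)).fit()
print(fit.forecast(steps=6))  # forecast the next 6 months
```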
Tokens are units of text processed by LLMs, with limits varying by model, affecting input/output length.
A token can be as short as one character or as long as one word; common words like 'cat' are single tokens, while rarer strings like 'ChatGPT' are typically split into several sub-word tokens.
Common token limits for open-source LLMs range from 512 to 4096 tokens, depending on the architecture.
For example, GPT-2 has a limit of 1024 tokens, while the original GPT-3 handles up to 2048 tokens (later variants extend this to about 4k).
Exceeding the token limit truncates the input or raises an error, so long documents must be chunked or summarized first.
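A small sketch using OpenAI's tiktoken library (assuming it is installed) to count tokens under the GPT-2 vocabulary and chunk text to fit a limit:

```python
import tiktoken  # assumes `pip install tiktoken`

enc = tiktoken.get_encoding("gpt2")  # the BPE vocabulary GPT-2 uses

for text in ["cat", "ChatGPT", "unbelievable"]:
    ids = enc.encode(text)
    print(f"{text!r} -> {len(ids)} token(s): {ids}")

def chunk(text: str, limit: int = 1024) -> list[str]:
    """Split text into pieces that each fit within a model's token limit."""
    ids = enc.encode(text)
    return [enc.decode(ids[i:i + limit]) for i in range(0, len(ids), limit)]
```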
Regression predicts continuous outcomes; time series analyzes data points over time for trends and patterns.
Regression focuses on relationships between variables (e.g., predicting house prices based on features).
Time series analyzes data collected at regular intervals (e.g., stock prices over time).
Regression can be used for static datasets, while time series requires temporal ordering.
In regression, predictors can be any explanatory variables; in time series, the variable's own past values typically serve as the predictors.
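One way to see the difference in code: a time series becomes a regression problem only after its temporal ordering is encoded as lag features. A pandas sketch with made-up prices:

```python
import pandas as pd

# Hypothetical daily prices
prices = pd.Series([100, 102, 101, 105, 107, 110], name="price")

# Past values become the predictors; the ordering itself carries information
df = pd.DataFrame({
    "lag_1": prices.shift(1),   # yesterday's price
    "lag_2": prices.shift(2),   # price two days ago
    "target": prices,
}).dropna()
print(df)
```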
LSTMs effectively handle long-term dependencies, overcoming RNNs' vanishing gradient problem.
LSTMs use memory cells to store information over long sequences, unlike RNNs which forget earlier data.
They employ gates (input, output, forget) to control the flow of information, enhancing learning.
LSTMs are better suited for tasks like language modeling and time series prediction where context is crucial.
For example, in language modeling, an LSTM can carry context from the start of a long sentence when predicting its final word.
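A minimal PyTorch sketch (shapes and sizes are arbitrary) showing an LSTM carrying state across a 100-step sequence:

```python
import torch
import torch.nn as nn

# Arbitrary sizes: 8 input features per step, 32 hidden units
lstm = nn.LSTM(input_size=8, hidden_size=32, num_layers=1, batch_first=True)

x = torch.randn(4, 100, 8)       # batch of 4 sequences, 100 time steps
out, (h_n, c_n) = lstm(x)        # c_n is the memory cell state

# The input/forget/output gates let c_n carry information across all
# 100 steps, which a plain RNN would struggle to do.
print(out.shape, h_n.shape, c_n.shape)  # (4, 100, 32) (1, 4, 32) (1, 4, 32)
```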
CNN is used for image recognition while MLP is used for general classification tasks.
CNN uses convolutional layers to extract features from images while MLP uses fully connected layers.
CNN is better suited for tasks that require spatial understanding like object detection while MLP is better for tabular data.
CNN has fewer parameters than MLP due to weight sharing in convolutional layers.
CNNs can also handle inputs of varying size when built as fully convolutional networks, whereas an MLP requires a fixed-length input vector.
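The weight-sharing point is easy to verify by counting parameters. A PyTorch sketch with two toy models for 28x28 grayscale images (architectures chosen only for illustration):

```python
import torch.nn as nn

mlp = nn.Sequential(nn.Flatten(),
                    nn.Linear(28 * 28, 128), nn.ReLU(),
                    nn.Linear(128, 10))

cnn = nn.Sequential(nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
                    nn.MaxPool2d(2), nn.Flatten(),
                    nn.Linear(16 * 14 * 14, 10))

count = lambda m: sum(p.numel() for p in m.parameters())
print("MLP parameters:", count(mlp))  # ~102k, mostly in the first Linear
print("CNN parameters:", count(cnn))  # the conv layer itself has only 160
```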
Central Limit Theorem states that the sampling distribution of the sample mean approaches a normal distribution as the sample size increases.
The Central Limit Theorem is essential in statistics as it allows us to make inferences about a population based on a sample.
It states that regardless of the shape of the population distribution, the sampling distribution of the sample mean will be approximately normally distributed for sufficiently large samples (a common rule of thumb is n ≥ 30).
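The theorem is easy to demonstrate by simulation. A NumPy sketch drawing sample means from a deliberately skewed population:

```python
import numpy as np

rng = np.random.default_rng(0)
population = rng.exponential(scale=2.0, size=100_000)  # heavily skewed

for n in (2, 30, 500):
    # 10,000 samples of size n; take the mean of each
    means = rng.choice(population, size=(10_000, n)).mean(axis=1)
    print(f"n={n:3d}  mean of means={means.mean():.3f}  spread={means.std():.3f}")

# The spread shrinks like sigma/sqrt(n), and a histogram of `means`
# looks increasingly normal despite the skewed population.
```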
Feature selection methods help in selecting the most relevant features for building predictive models.
Feature selection methods aim to reduce the number of input variables to only those that are most relevant.
Common methods include filter methods, wrapper methods, and embedded methods.
Examples include Recursive Feature Elimination (RFE, a wrapper method) and Lasso regression (an embedded method); PCA is often listed alongside them, though it performs dimensionality reduction rather than selecting original features.
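A scikit-learn sketch of the wrapper approach on synthetic data (the dataset and estimator are illustrative choices):

```python
from sklearn.datasets import make_classification
from sklearn.feature_selection import RFE
from sklearn.linear_model import LogisticRegression

# Synthetic data: 10 features, only 3 of which are informative
X, y = make_classification(n_samples=500, n_features=10, n_informative=3,
                           random_state=0)

# Wrapper method: repeatedly refit, dropping the weakest feature each pass
rfe = RFE(LogisticRegression(max_iter=1000), n_features_to_select=3).fit(X, y)
print("selected features:", rfe.support_)
print("ranking (1 = kept):", rfe.ranking_)
```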
I applied via Referral and was interviewed in Nov 2024. There were 2 interview rounds.
I applied via Referral and was interviewed before May 2023. There was 1 interview round.
I applied via Approached by Company and was interviewed before Sep 2021. There were 3 interview rounds.
I applied via Recruitment Consultant and was interviewed before Aug 2021. There was 1 interview round.
I applied via Walk-in and was interviewed in Mar 2020. There was 1 interview round.
R square is a statistical measure that represents the proportion of the variance in the dependent variable explained by the independent variables.
R square is a value between 0 and 1, where 0 indicates that the independent variables do not explain any of the variance in the dependent variable, and 1 indicates that they explain all of it.
It is used to evaluate the goodness of fit of a regression model.
Adjusted R square takes the number of predictors into account, penalizing the score when added variables do not improve the fit.
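A sketch of both quantities on synthetic data; the adjusted value uses the standard formula 1 - (1 - R²)(n - 1)/(n - p - 1):

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.metrics import r2_score

rng = np.random.default_rng(1)
X = rng.normal(size=(100, 3))                   # n=100 samples, p=3 predictors
y = X @ np.array([2.0, -1.0, 0.5]) + rng.normal(size=100)

pred = LinearRegression().fit(X, y).predict(X)
r2 = r2_score(y, pred)

n, p = X.shape
adj_r2 = 1 - (1 - r2) * (n - 1) / (n - p - 1)   # penalizes extra predictors
print(f"R square = {r2:.3f}, adjusted R square = {adj_r2:.3f}")
```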
WOE (Weight of Evidence) and IV (Information Value) are metrics used for feature selection and assessing predictive power in models.
WOE transforms categorical variables into continuous variables, making them more suitable for modeling.
IV quantifies the predictive power of a feature by measuring the separation between the good and bad outcomes.
For example, if a feature has an IV of 0.3, it indicates strong predictive power (conventional bands treat roughly 0.3 to 0.5 as strong).
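A pandas sketch of both calculations on a tiny made-up sample (real scorecards bin continuous features first, and zero-count buckets need smoothing to avoid division by zero):

```python
import numpy as np
import pandas as pd

# Hypothetical binned feature vs. binary outcome (1 = bad / default)
df = pd.DataFrame({
    "bucket": ["low", "low", "mid", "mid", "mid", "high", "high", "high"],
    "bad":    [0,     1,     0,     1,     0,     1,      1,      0],
})

grp = df.groupby("bucket")["bad"].agg(bad="sum", total="count")
grp["good"] = grp["total"] - grp["bad"]
pct_good = grp["good"] / grp["good"].sum()   # distribution of goods
pct_bad = grp["bad"] / grp["bad"].sum()      # distribution of bads

grp["woe"] = np.log(pct_good / pct_bad)          # WOE per bucket
iv = ((pct_good - pct_bad) * grp["woe"]).sum()   # Information Value
print(grp[["woe"]])
print(f"IV = {iv:.3f}")  # ~0.35 here, i.e. strong by the usual bands
```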
Variable reducing techniques are methods used to identify and select the most relevant variables in a dataset.
Variable reducing techniques help in reducing the number of variables in a dataset.
These techniques aim to identify the most important variables that contribute significantly to the outcome.
Some common variable reducing techniques include feature selection, dimensionality reduction, and correlation analysis.
Feature selection keeps a subset of the original variables, whereas dimensionality reduction methods such as PCA construct new composite variables.
The Wald test is used in logistic regression to check the significance of the variable.
The Wald statistic is the squared ratio of the estimated coefficient to its standard error.
It follows a chi-square distribution with one degree of freedom (equivalently, the unsquared ratio is a standard normal z-statistic).
A small p-value indicates that the variable is significant.
For example, in Python, the statsmodels library provides the Wald test in the summary of a logistic regression model.
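A statsmodels sketch on simulated data; the `z` column of the summary is coefficient/std-error, whose square is the Wald chi-square statistic:

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
X = sm.add_constant(rng.normal(size=(200, 2)))   # intercept + 2 predictors
true_beta = np.array([-0.5, 1.2, 0.0])           # x2 has no real effect
y = rng.binomial(1, 1 / (1 + np.exp(-X @ true_beta)))

fit = sm.Logit(y, X).fit(disp=0)
print(fit.summary())  # z = coef / std err, with two-sided p-values
print("Wald z for x1:", fit.params[1] / fit.bse[1], "p:", fit.pvalues[1])
```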
Multicollinearity in logistic regression can be checked using correlation matrix and variance inflation factor (VIF).
Calculate the correlation matrix of the independent variables and check for high correlation coefficients.
Calculate the VIF for each independent variable and check for values greater than 5 or 10.
Consider removing one of the highly correlated variables, or a variable with high VIF, to address multicollinearity.
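A sketch using statsmodels' variance_inflation_factor on data built to be collinear:

```python
import numpy as np
import pandas as pd
from statsmodels.stats.outliers_influence import variance_inflation_factor

rng = np.random.default_rng(0)
x1 = rng.normal(size=200)
x2 = 0.9 * x1 + rng.normal(scale=0.1, size=200)  # nearly collinear with x1
x3 = rng.normal(size=200)
X = pd.DataFrame({"x1": x1, "x2": x2, "x3": x3})

for i, col in enumerate(X.columns):
    print(col, round(variance_inflation_factor(X.values, i), 1))
# x1 and x2 show VIFs far above 10; dropping either one resolves it
```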
Bagging and boosting are ensemble methods used in machine learning to improve model performance.
Bagging involves training multiple models on different subsets of the training data and then combining their predictions through averaging or voting.
Boosting involves iteratively training models on the same dataset, with each subsequent model focusing on the samples that were misclassified by the previous model.
Bagging reduces variance, while boosting primarily reduces bias; boosting can overfit noisy data more easily.
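A scikit-learn sketch contrasting the two on the same synthetic dataset (estimator choices are illustrative):

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import BaggingClassifier, GradientBoostingClassifier
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=1000, random_state=0)

# Bagging: independent trees on bootstrap samples, predictions aggregated
bag = BaggingClassifier(DecisionTreeClassifier(), n_estimators=100,
                        random_state=0)
# Boosting: trees built sequentially, each correcting its predecessors
boost = GradientBoostingClassifier(n_estimators=100, random_state=0)

print("bagging :", cross_val_score(bag, X, y, cv=5).mean())
print("boosting:", cross_val_score(boost, X, y, cv=5).mean())
```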
Logistic regression is a statistical method used to analyze and model the relationship between a binary dependent variable and one or more independent variables.
It is a type of regression analysis used for predicting the outcome of a categorical dependent variable based on one or more predictor variables.
It uses a logistic function to model the probability of the dependent variable taking a particular value.
It is commonly used for binary classification problems such as predicting default, churn, or disease presence.
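The logistic function itself, with hypothetical coefficients to show how a linear score becomes a probability:

```python
import numpy as np

def sigmoid(z):
    """Logistic function: maps any real number into (0, 1)."""
    return 1 / (1 + np.exp(-z))

# P(y=1 | x) = sigmoid(b0 + b1 * x); coefficients below are made up
b0, b1 = -3.0, 0.8
for x in (1, 4, 8):
    print(f"x={x}: P(y=1) = {sigmoid(b0 + b1 * x):.3f}")
```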
Gini coefficient measures the inequality among values of a frequency distribution.
Gini coefficient ranges from 0 to 1, where 0 represents perfect equality and 1 represents perfect inequality.
It is commonly used to measure income inequality in a population.
A Gini coefficient of 0.4 or higher is considered to be a high level of inequality.
Gini coefficient can be calculated from the Lorenz curve, which plots the cumulative share of income against the cumulative share of the population.
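A NumPy sketch computing the coefficient from the Lorenz curve by the trapezoidal rule (discrete data makes the result approximate):

```python
import numpy as np

def gini(incomes):
    """Gini = 1 - 2 * (area under the Lorenz curve)."""
    x = np.sort(np.asarray(incomes, dtype=float))
    cum_income = np.insert(np.cumsum(x) / x.sum(), 0, 0.0)
    cum_pop = np.linspace(0.0, 1.0, len(cum_income))
    return 1 - 2 * np.trapz(cum_income, cum_pop)

print(gini([10, 10, 10, 10]))  # 0.0  -> perfect equality
print(gini([0, 0, 0, 100]))    # 0.75 -> one person holds everything
```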
A chair is a piece of furniture used for sitting, while a cart is a vehicle used for transporting goods.
A chair typically has a backrest and armrests, while a cart does not.
A chair is designed for one person to sit on, while a cart can carry multiple items or people.
A chair is usually stationary, while a cart is mobile and can be pushed or pulled.
A chair is commonly found in homes, offices, and public spaces, while a cart is typically found in warehouses, stores, and markets.
Outliers can be detected using statistical methods like box plots, z-score, and IQR. Treatment can be removal or transformation.
Use box plots to visualize outliers
Calculate z-scores and remove data points with an absolute z-score greater than 3
Calculate the IQR and remove data points outside [Q1 - 1.5*IQR, Q3 + 1.5*IQR]
Transform data using log or square root to reduce the impact of outliers
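A NumPy sketch of all three steps on a made-up sample (with only ten points, a looser z threshold of 2.5 is used; 3 is the common default for larger samples):

```python
import numpy as np

data = np.array([12, 14, 15, 13, 14, 16, 15, 14, 13, 95])  # 95 is the outlier

# Z-score rule
z = (data - data.mean()) / data.std()
print("z-score outliers:", data[np.abs(z) > 2.5])

# IQR rule: outside [Q1 - 1.5*IQR, Q3 + 1.5*IQR]
q1, q3 = np.percentile(data, [25, 75])
iqr = q3 - q1
mask = (data < q1 - 1.5 * iqr) | (data > q3 + 1.5 * iqr)
print("IQR outliers:", data[mask])

# Treatment by transformation: log compresses the outlier's influence
print("log-transformed:", np.round(np.log(data), 2))
```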
Model Gini is a measure of statistical dispersion used to evaluate the performance of classification models.
Model Gini is calculated as twice the area between the ROC curve and the diagonal line (random model).
It ranges from 0 (worst model) to 1 (best model), with higher values indicating better model performance.
A Gini of 0 (equivalently, an AUC of 0.5) indicates a model that is no better than random guessing.
Commonly used in credit risk modeling to compare the discriminatory power of scorecards.
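Because Gini = 2*AUC - 1, it is one line once the AUC is known. A scikit-learn sketch with made-up scores:

```python
import numpy as np
from sklearn.metrics import roc_auc_score

# Hypothetical true labels and model scores
y_true = np.array([0, 0, 1, 0, 1, 1, 0, 1])
y_score = np.array([0.10, 0.30, 0.70, 0.72, 0.80, 0.65, 0.20, 0.90])

auc = roc_auc_score(y_true, y_score)
gini = 2 * auc - 1   # twice the area between the ROC curve and the diagonal
print(f"AUC = {auc:.3f}, Gini = {gini:.3f}")  # random model: AUC 0.5, Gini 0
```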
XGBoost model is trained by specifying parameters, splitting data into training and validation sets, fitting the model, and tuning hyperparameters.
Specify parameters for XGBoost model such as learning rate, max depth, and number of trees
Split data into training and validation sets using train_test_split function
Fit the XGBoost model on training data using fit method
Tune hyperparameters using techniques like grid search or random search with cross-validation
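A sketch of that workflow with the xgboost and scikit-learn APIs (the parameter grid is deliberately tiny; real tuning would search more values):

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import GridSearchCV, train_test_split
from xgboost import XGBClassifier

X, y = make_classification(n_samples=1000, random_state=0)
X_tr, X_val, y_tr, y_val = train_test_split(X, y, test_size=0.2,
                                            random_state=0)

# Specify key parameters, then fit on the training split
model = XGBClassifier(learning_rate=0.1, max_depth=4, n_estimators=200)
model.fit(X_tr, y_tr)
print("validation accuracy:", model.score(X_val, y_val))

# Hyperparameter tuning via grid search with cross-validation
grid = GridSearchCV(XGBClassifier(),
                    {"max_depth": [3, 5], "learning_rate": [0.05, 0.1]},
                    cv=3)
grid.fit(X_tr, y_tr)
print("best params:", grid.best_params_)
```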
Python coding question and ML question
I applied via LinkedIn and was interviewed in Jul 2024. There were 3 interview rounds.
Assignment on credit risk
Role | Salaries reported | Salary range
Assistant Manager | 2.8k | ₹5.5 L/yr - ₹13.2 L/yr
Manager | 2.2k | ₹14 L/yr - ₹24.1 L/yr
Senior Software Engineer | 1.8k | ₹13.2 L/yr - ₹24 L/yr
Assistant Vice President | 1.7k | ₹25 L/yr - ₹43 L/yr
Software Engineer | 1.5k | ₹7.8 L/yr - ₹14 L/yr