ETL (Extract, Transform, Load) is a data integration process that moves data from sources to a data warehouse.
Extract: Gather data from various sources like databases, APIs, or flat files. Example: Pulling sales data from a CRM system.
Transform: Clean and convert data into a suitable format. Example: Normalizing date formats or aggregating sales data by region.
Load: Insert the transformed data into a target system such as a data warehouse. Example: Writing the aggregated sales figures into warehouse tables.
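The three steps above can be sketched in plain Python. This is a minimal illustration, not a real pipeline; the source rows, the aggregation rule, and the dict-based "warehouse" are all hypothetical stand-ins.

```python
# Minimal ETL sketch: extract rows, aggregate sales by region, load the result.

def extract():
    # Extract: in practice this would read from a database, API, or flat file.
    return [{"region": "north", "amount": "100"},
            {"region": "north", "amount": "50"}]

def transform(rows):
    # Transform: cast amounts to int and aggregate sales by region.
    totals = {}
    for row in rows:
        totals[row["region"]] = totals.get(row["region"], 0) + int(row["amount"])
    return totals

def load(totals, warehouse):
    # Load: insert the transformed data into the target (here, just a dict).
    warehouse.update(totals)

warehouse = {}
load(transform(extract()), warehouse)
print(warehouse)  # {'north': 150}
```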
Partitioning is dividing a large dataset into smaller, manageable parts. Coalescing is merging small partitions into larger ones.
Partitioning is useful for parallel processing and optimizing query performance.
Coalescing reduces the number of partitions and can improve query performance.
In Spark, partitioning can be done based on a specific column or by specifying the number of partitions.
Coalescing can be used to reduce the number of partitions without a full shuffle, for example after a filter leaves many small partitions.
Internal tables store data within Hive's warehouse directory while external tables store data outside of it.
Internal tables are managed by Hive and are deleted when the table is dropped
External tables are not managed by Hive and data is not deleted when the table is dropped
Internal tables are faster for querying as data is stored within Hive's warehouse directory
External tables are useful for sharing data between Hive and other applications that read the same files.
Window function is a SQL function that performs a calculation across a set of rows that are related to the current row.
Window functions are used to calculate running totals, moving averages, and other calculations that depend on the order of rows.
They allow you to perform calculations on a subset of rows within a larger result set.
Examples of window functions include ROW_NUMBER, RANK, DENSE_RANK, and NTILE.
Window functions are written with the OVER clause, which defines the partitioning and ordering of the rows the calculation sees.
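A runnable demo of the functions above, using SQLite through Python (window functions need SQLite 3.25 or newer); the `sales` table and its values are invented for illustration.

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE sales (region TEXT, amount INT)")
con.executemany("INSERT INTO sales VALUES (?, ?)",
                [("n", 10), ("n", 20), ("s", 5), ("s", 5), ("s", 30)])

# ROW_NUMBER and RANK over one window, plus a per-region running total.
rows = con.execute("""
    SELECT region, amount,
           ROW_NUMBER() OVER w AS rn,
           RANK()       OVER w AS rnk,
           SUM(amount)  OVER (PARTITION BY region ORDER BY amount) AS running
    FROM sales
    WINDOW w AS (PARTITION BY region ORDER BY amount DESC)
""").fetchall()
for row in rows:
    print(row)
```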
Repartitioning and bucketing are techniques used in Apache Spark to optimize data processing.
Repartitioning is the process of redistributing data across partitions to optimize parallelism and improve performance.
Bucketing is a technique used to organize data into more manageable and efficient groups based on a specific column or set of columns.
Repartitioning and bucketing can be used together to further optimize data processing.
An anonymous function is a function without a name.
Also known as lambda functions or closures
Can be used as arguments to higher-order functions
Can be defined inline without a separate declaration
Example: lambda x: x**2 defines a function that squares its input
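The points above in runnable form: lambdas passed inline to the higher-order functions `map` and `sorted`. The data is arbitrary.

```python
# Anonymous functions used as arguments to higher-order functions.
squares = list(map(lambda x: x ** 2, [1, 2, 3]))
print(squares)  # [1, 4, 9]

# A lambda as a sort key: order words by length.
words = sorted(["pear", "fig", "apple"], key=lambda w: len(w))
print(words)  # ['fig', 'pear', 'apple']
```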
Snowflake stages are locations that hold data files for loading and unloading; there are four kinds: user stages, table stages, named internal stages, and external stages.
A user stage (referenced as @~) is automatically allocated to each user for staging that user's files.
A table stage (referenced as @%table_name) is tied to a specific table.
A named internal stage is a database object that stores files inside Snowflake and can be shared across users and tables.
An external stage points to files in external cloud storage such as Amazon S3, Google Cloud Storage, or Azure Blob Storage.
Stages can be created and managed using SQL commands (CREATE STAGE, PUT, COPY INTO) or the Snowflake web interface.
Data quality issues can be dealt with by identifying the root cause, implementing data validation checks, and establishing data governance policies.
Identify the root cause of the data quality issue
Implement data validation checks to prevent future issues
Establish data governance policies to ensure data accuracy and consistency
Regularly monitor and audit data quality
Involve stakeholders in the data quality process
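The "data validation checks" step above can be sketched as a small rule-based validator; the rules (non-empty id, ISO date format) and the sample records are hypothetical examples, not a production framework.

```python
import re

def validate(record):
    # Collect every rule violation for one record.
    errors = []
    if not record.get("id"):
        errors.append("missing id")
    if not re.fullmatch(r"\d{4}-\d{2}-\d{2}", record.get("date", "")):
        errors.append("bad date format")
    return errors

good = {"id": 1, "date": "2024-01-31"}
bad = {"id": None, "date": "31/01/2024"}
print(validate(good))  # []
print(validate(bad))   # ['missing id', 'bad date format']
```

Records that fail validation can then be quarantined and reported, feeding the monitoring and governance steps above.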
RANK and DENSE_RANK both assign the same rank to tied rows, but RANK skips the following ranks after a tie (1, 1, 3) while DENSE_RANK does not (1, 1, 2). A left join returns all rows from the left table plus matching rows from the right table, while a left anti join returns only the left-table rows that have no match in the right table.
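Both differences can be demonstrated with SQLite from Python; the tables and values are made up, and since SQLite has no LEFT ANTI JOIN keyword, the anti join is expressed with NOT EXISTS.

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE scores (name TEXT, score INT)")
con.executemany("INSERT INTO scores VALUES (?, ?)",
                [("a", 90), ("b", 90), ("c", 80)])

# After the tie at 90, RANK jumps to 3 while DENSE_RANK continues at 2.
ranks = con.execute("""
    SELECT name, RANK() OVER w, DENSE_RANK() OVER w
    FROM scores WINDOW w AS (ORDER BY score DESC)
    ORDER BY score DESC, name
""").fetchall()
print(ranks)  # [('a', 1, 1), ('b', 1, 1), ('c', 3, 2)]

con.execute("CREATE TABLE orders (name TEXT)")
con.execute("INSERT INTO orders VALUES ('a')")

# Left anti join: scores rows with no matching row in orders.
anti = con.execute("""
    SELECT s.name FROM scores s
    WHERE NOT EXISTS (SELECT 1 FROM orders o WHERE o.name = s.name)
""").fetchall()
print(anti)
```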
Clustering is the process of grouping similar data points together. Pods are groups of one or more containers, while nodes are individual machines in a cluster.
Clustering is a technique used in machine learning to group similar data points together based on certain features or characteristics.
Pods in a cluster are groups of one or more containers that share resources and are scheduled together on the same node.
Nodes are the individual machines (physical or virtual) in a cluster that run the pods.
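The machine-learning sense of clustering can be sketched with a toy 1-D k-means loop; the points and initial centroids are arbitrary, and real work would use a library such as scikit-learn.

```python
# Toy 1-D k-means: assign each point to its nearest centroid, then move each
# centroid to the mean of its cluster, and repeat.
def kmeans_1d(points, centroids, iterations=10):
    for _ in range(iterations):
        clusters = {c: [] for c in centroids}
        for p in points:
            nearest = min(centroids, key=lambda c: abs(c - p))
            clusters[nearest].append(p)
        centroids = [sum(v) / len(v) if v else c for c, v in clusters.items()]
    return sorted(centroids)

points = [1.0, 1.2, 0.8, 9.0, 9.5, 8.5]
print(kmeans_1d(points, [0.0, 10.0]))  # converges near [1.0, 9.0]
```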
I appeared for an interview in Apr 2025, where I was asked the following questions.
I applied via Walk-in
I applied via a recruitment consultant and was interviewed in Aug 2024. There were 2 interview rounds.
Focus on quantitative maths and aptitude a bit more.
I applied via LinkedIn and was interviewed in Oct 2024. There was 1 interview round.
Reverse strings in a Python list
Use list comprehension to iterate through the list and reverse each string
Use the slice notation [::-1] to reverse each string
Example: strings = ['hello', 'world'], reversed_strings = [s[::-1] for s in strings]
To find the 2nd highest salary in SQL, combine ORDER BY with LIMIT and OFFSET (or use a subquery with MAX).
Use SELECT DISTINCT on the salary column so duplicate top salaries are not counted twice.
Use the ORDER BY clause to sort the salaries in descending order.
Use LIMIT 1 OFFSET 1 to skip the highest salary and return the next one.
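The query can be run end to end with SQLite from Python; the `employees` table and the salary values are invented, and the duplicate 90 shows why DISTINCT matters.

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE employees (salary INT)")
con.executemany("INSERT INTO employees VALUES (?)",
                [(50,), (90,), (90,), (70,)])

# DISTINCT collapses the duplicate top salary; OFFSET 1 skips the highest.
second = con.execute("""
    SELECT DISTINCT salary FROM employees
    ORDER BY salary DESC
    LIMIT 1 OFFSET 1
""").fetchone()[0]
print(second)  # 70
```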
I appeared for an interview in Sep 2024.
I applied via Approached by Company and was interviewed in Sep 2024. There was 1 interview round.
SCD 1 overwrites old data with new data, while SCD 2 keeps track of historical changes.
SCD 1 updates existing records with new data, losing historical information.
SCD 2 creates new records for each change, preserving historical data.
SCD 1 is simpler and faster, but can lead to data loss.
SCD 2 is more complex and slower, but maintains a full history of changes.
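The contrast above can be sketched on a toy customer dimension; the schema, helper names, and dates are hypothetical.

```python
from datetime import date

def scd1_update(dim, key, new_city):
    # SCD Type 1: overwrite in place; the old value is lost.
    dim[key]["city"] = new_city

def scd2_update(history, key, new_city, today):
    # SCD Type 2: close the current row, then append a new versioned row.
    for row in history:
        if row["key"] == key and row["end_date"] is None:
            row["end_date"] = today
    history.append({"key": key, "city": new_city,
                    "start_date": today, "end_date": None})

# Type 1: only the latest value survives.
dim = {1: {"city": "Pune"}}
scd1_update(dim, 1, "Mumbai")
print(dim)  # {1: {'city': 'Mumbai'}}

# Type 2: both versions survive, with validity dates.
history = [{"key": 1, "city": "Pune",
            "start_date": date(2023, 1, 1), "end_date": None}]
scd2_update(history, 1, "Mumbai", date(2024, 6, 1))
print(len(history))  # 2
```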
Corrupt record handling in Spark involves identifying and handling data that does not conform to expected formats.
Use the DataFrameReader option("badRecordsPath", "path/to/bad/records") (a Databricks feature) to save corrupt records to a separate location for further analysis.
Use DataFrame.na.drop() or DataFrame.na.fill() to handle corrupt records by dropping or filling missing values.
Implement custom logic to identify and handle corrupt records based on the specific requirements.
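A plain-Python analogue of routing corrupt records to a side location (what badRecordsPath does on Databricks, or PERMISSIVE mode with a corrupt-record column in open-source Spark); the parser and sample lines are invented.

```python
def parse(line):
    # Expect exactly two comma-separated fields: name and an integer amount.
    parts = line.split(",")
    if len(parts) != 2:
        raise ValueError("wrong column count")
    name, amount = parts
    return {"name": name, "amount": int(amount)}

good, corrupt = [], []
for line in ["alice,10", "bob,oops", "charlie"]:
    try:
        good.append(parse(line))
    except ValueError:
        corrupt.append(line)  # quarantine for later analysis

print(good)     # [{'name': 'alice', 'amount': 10}]
print(corrupt)  # ['bob,oops', 'charlie']
```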
Object-oriented programming (OOP) is a programming paradigm based on the concept of objects, which can contain data in the form of fields and code in the form of procedures.
OOP focuses on creating objects that interact with each other to solve a problem
Key concepts include encapsulation, inheritance, polymorphism, and abstraction
Encapsulation involves bundling data and the methods that operate on that data into a single unit.
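A minimal illustration of the four concepts above; the classes are invented for the example.

```python
class Shape:
    def __init__(self, name):
        self._name = name      # encapsulation: state lives inside the object

    def area(self):            # abstraction: callers use area(), not internals
        raise NotImplementedError

class Square(Shape):           # inheritance: Square reuses Shape
    def __init__(self, side):
        super().__init__("square")
        self._side = side

    def area(self):            # polymorphism: same call, shape-specific result
        return self._side ** 2

class Circle(Shape):
    def __init__(self, radius):
        super().__init__("circle")
        self._radius = radius

    def area(self):
        return 3.14159 * self._radius ** 2

shapes = [Square(2), Circle(1)]
print([round(s.area(), 2) for s in shapes])  # [4, 3.14]
```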
Data engineer life cycle involves collecting, storing, processing, and analyzing data using various tools.
Data collection: Gathering data from various sources such as databases, APIs, and logs.
Data storage: Storing data in databases, data lakes, or data warehouses.
Data processing: Cleaning, transforming, and enriching data using tools like Apache Spark or Hadoop.
Data analysis: Analyzing data to extract insights and make data-driven decisions.
Spark join strategies include broadcast join, shuffle hash join, and shuffle sort merge join.
Broadcast join is used when one of the DataFrames is small enough to fit in memory on all nodes.
Shuffle hash join is used when joining two large DataFrames by partitioning and shuffling the data based on the join key.
Shuffle sort merge join is used when joining two large DataFrames by sorting and merging the data based on the join key.
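The broadcast strategy can be sketched in plain Python: the small table becomes an in-memory dict that is "broadcast" over the large one, so no shuffle of the large side is needed. In Spark this corresponds roughly to `large_df.join(broadcast(small_df), "region")`; the tables here are invented.

```python
# Small dimension table, broadcast as a dict lookup.
small = {"n": "North", "s": "South"}
# Large fact rows: (region_code, amount).
large = [("n", 10), ("s", 5), ("n", 7)]

# Inner join by probing the broadcast dict for each large-side row.
joined = [(small[region], amount)
          for region, amount in large
          if region in small]
print(joined)  # [('North', 10), ('South', 5), ('North', 7)]
```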
Spark is a fast and general-purpose cluster computing system for big data processing.
Spark is popular for its speed and ease of use in processing large datasets.
It provides in-memory processing capabilities, making it faster than traditional disk-based processing systems.
Spark supports multiple programming languages like Java, Scala, Python, and R.
It offers a wide range of libraries for diverse tasks such as SQL, streaming, machine learning, and graph processing.
The duration of the TCS Data Engineer interview process can vary, but it typically takes less than 2 weeks to complete.
based on 101 interview experiences