-
PySpark DataFrame: Counting NULL Values in Each Column
To count the number of NULL values in each column of a PySpark DataFrame, use the isNull() function together with the agg method: isNull() flags the NULL values, and the aggregation counts those flags for every column. Counting NULL… Read More ⇢
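A minimal sketch of that pattern, using an invented sample DataFrame (the column names below are placeholders):

    from pyspark.sql import SparkSession
    from pyspark.sql import functions as F

    spark = SparkSession.builder.appName("null-counts").getOrCreate()

    # Hypothetical sample data; any DataFrame works the same way.
    df = spark.createDataFrame(
        [("a", None, 1), (None, "x", None), ("b", "y", 3)],
        ["col1", "col2", "col3"],
    )

    # For each column, count the rows where isNull() is true, all in one agg call.
    null_counts = df.agg(
        *[F.count(F.when(F.col(c).isNull(), c)).alias(c) for c in df.columns]
    )
    null_counts.show()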
-
PySpark DataFrame: Common Operations Cheat Sheet
In PySpark, many methods are available directly on DataFrame objects and other classes, so no separate import is needed. Here’s a cheat sheet of common PySpark methods. 1. DataFrame Methods: available directly on DataFrame objects. 2. SparkSession Methods: available directly on the SparkSession… Read More ⇢
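To give a flavor of what the cheat sheet covers (the data and column names below are invented), a few of the most common SparkSession and DataFrame methods chain together like this:

    from pyspark.sql import SparkSession
    from pyspark.sql import functions as F

    spark = SparkSession.builder.appName("cheat-sheet-demo").getOrCreate()

    # SparkSession methods: createDataFrame, read, sql, ...
    df = spark.createDataFrame(
        [("alice", 34), ("bob", 45), ("alice", 29)], ["name", "age"]
    )

    # DataFrame methods: select, filter, groupBy, agg, orderBy, show, ...
    (df.select("name", "age")
       .filter(F.col("age") > 28)
       .groupBy("name")
       .agg(F.avg("age").alias("avg_age"))
       .orderBy("name")
       .show())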
-
Parquet vs ORC vs Avro: Top Differences Explained
This post compares the performance and features of three data formats: Parquet, ORC, and Avro. Parquet and ORC are columnar formats that optimize storage and query performance, while Avro is row-oriented and supports schema evolution for varied workloads. Each format suits specific big data applications, with an emphasis on efficiency and compatibility. Read More ⇢
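As a rough PySpark illustration of writing the same data in each of the three formats (the paths are placeholders, and the Avro write assumes the external spark-avro package is available, which the post may or may not cover):

    from pyspark.sql import SparkSession

    spark = SparkSession.builder.appName("format-demo").getOrCreate()
    df = spark.createDataFrame([(1, "a"), (2, "b")], ["id", "value"])

    # Columnar formats, built into Spark: efficient for analytical scans.
    df.write.mode("overwrite").parquet("/tmp/demo_parquet")
    df.write.mode("overwrite").orc("/tmp/demo_orc")

    # Row-oriented Avro: needs the spark-avro package on the classpath,
    # e.g. --packages org.apache.spark:spark-avro_2.12:3.5.1
    df.write.mode("overwrite").format("avro").save("/tmp/demo_avro")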
-
AWS Step Functions and AWS Glue Job Workflow Configuration
Here’s how you can set up an architecture in which an Amazon S3 file upload triggers an AWS Lambda function via Amazon EventBridge (formerly known as CloudWatch Events), the Lambda function starts an AWS Step Functions workflow, and that workflow runs an AWS Glue job. Step-by-Step Overview Step 1: Configure S3 Bucket to… Read More ⇢
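A hedged sketch of just the Lambda piece, assuming EventBridge delivers the S3 "Object Created" event and that the state machine ARN is supplied as an environment variable (every name here is a placeholder, not the post's actual configuration):

    import json
    import os

    import boto3

    sfn = boto3.client("stepfunctions")

    def lambda_handler(event, context):
        # EventBridge S3 events carry the bucket and key under "detail".
        detail = event.get("detail", {})
        bucket = detail.get("bucket", {}).get("name")
        key = detail.get("object", {}).get("key")

        # Kick off the Step Functions workflow that will run the Glue job.
        response = sfn.start_execution(
            stateMachineArn=os.environ["STATE_MACHINE_ARN"],
            input=json.dumps({"bucket": bucket, "key": key}),
        )
        return {"executionArn": response["executionArn"]}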
-
AWS: 3 Easy to Write Lambda Functions
Here are three examples of AWS Lambda functions for different use cases: a hello-world function, image resizing, and fetching data from DynamoDB. 1. Basic Hello World Function This is a simple AWS Lambda function that returns a “Hello, World!” message. It’s often used as a first function to understand the basics of AWS Lambda. def… Read More ⇢
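The excerpt's code is cut off above; a typical hello-world handler (a generic sketch, not necessarily the post's exact version) looks like this:

    import json

    def lambda_handler(event, context):
        # Return a simple JSON payload that a test event or API Gateway can read.
        return {
            "statusCode": 200,
            "body": json.dumps({"message": "Hello, World!"}),
        }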
-
How to Delete Source Object After Glue Job Run Complete
Deleting S3 source objects after a Glue job completes streamlines data management, frees up space, and keeps the dataset clean for analysis. Read More ⇢
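As a rough illustration (the bucket and key are placeholders), the deletion itself is a single boto3 call, typically made at the end of the Glue job or from a follow-up step:

    import boto3

    s3 = boto3.client("s3")

    def delete_source_object(bucket: str, key: str) -> None:
        # Remove the source file once the Glue job has finished processing it.
        s3.delete_object(Bucket=bucket, Key=key)

    # Example usage with placeholder names.
    delete_source_object("my-source-bucket", "incoming/data.csv")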
-
CSV Column Validation Using PySpark: Step-by-Step Guide
The Python code demonstrates CSV file validation using PySpark: validation rules are applied to columns, and the resulting DataFrames are written to S3 and PostgreSQL. Read More ⇢
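A simplified sketch of the pattern (the file path, columns, and rule below are invented; the post's own rules and its S3/PostgreSQL writes will differ):

    from pyspark.sql import SparkSession
    from pyspark.sql import functions as F

    spark = SparkSession.builder.appName("csv-validation-demo").getOrCreate()

    # Read the CSV with a header row; the path is a placeholder.
    df = spark.read.csv("/tmp/input.csv", header=True, inferSchema=True)

    # Example rule: 'id' must be present and 'amount' must be non-negative.
    rule = F.col("id").isNotNull() & (F.col("amount") >= 0)

    valid_df = df.filter(rule)
    # Treat rows where the rule evaluates to NULL as invalid as well.
    invalid_df = df.filter(~F.coalesce(rule, F.lit(False)))

    # The split frames can then be written out, e.g. to S3 as Parquet and
    # to PostgreSQL over JDBC (connection details omitted here).
    valid_df.write.mode("overwrite").parquet("/tmp/valid")
    invalid_df.write.mode("overwrite").parquet("/tmp/invalid")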
-
20 Python Pandas Interview Questions and Answers
Pandas is a data manipulation library for Python, offering Series and DataFrame structures along with CSV I/O, merging, grouping, and visualization capabilities. Read More ⇢
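A few of the basics those questions usually touch on, in one short snippet (the data is made up):

    import pandas as pd

    # DataFrame construction.
    sales = pd.DataFrame({"store": ["A", "B", "A"], "revenue": [100, 200, 150]})
    targets = pd.DataFrame({"store": ["A", "B"], "target": [120, 180]})

    # Grouping and merging, two staple interview topics.
    totals = sales.groupby("store", as_index=False)["revenue"].sum()
    report = totals.merge(targets, on="store", how="left")
    print(report)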
-
Group By Vs Partition By: Here’s the Right Answer
SQL uses GROUP BY to aggregate data into summary rows, while PARTITION BY lets window functions divide the result set without collapsing it. Read More ⇢
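To keep one language across the examples on this page, here is that contrast expressed through Spark SQL from PySpark (the table and columns are invented): GROUP BY collapses rows into one summary row per group, while PARTITION BY keeps every row and attaches the windowed aggregate to it.

    from pyspark.sql import SparkSession

    spark = SparkSession.builder.appName("groupby-vs-partitionby").getOrCreate()

    spark.createDataFrame(
        [("east", 10), ("east", 20), ("west", 5)], ["region", "amount"]
    ).createOrReplaceTempView("orders")

    # GROUP BY: one summary row per region.
    spark.sql("""
        SELECT region, SUM(amount) AS total
        FROM orders
        GROUP BY region
    """).show()

    # PARTITION BY: every original row, with the per-region total alongside it.
    spark.sql("""
        SELECT region, amount,
               SUM(amount) OVER (PARTITION BY region) AS region_total
        FROM orders
    """).show()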









