Posts

Showing posts with the label PySpark

Featured Post

14 Top Data Pipeline Key Terms Explained

Here are some key terms commonly used in data pipelines:

1. Data Sources
Definition: Points where data originates (e.g., databases, APIs, files, IoT devices).
Examples: Relational databases (PostgreSQL, MySQL), APIs, cloud storage (S3), streaming data (Kafka), and on-premise systems.

2. Data Ingestion
Definition: The process of importing or collecting raw data from various sources into a system for processing or storage.
Methods: Batch ingestion, real-time/streaming ingestion.

3. Data Transformation
Definition: Modifying, cleaning, or enriching data to make it usable for analysis or storage.
Examples: Data cleaning (removing duplicates, fixing missing values). Data enrichment (joining with other data sources). ETL (Extract, Transform, Load). ELT (Extract, Load, Transform).

4. Data Storage
Definition: Locations where data is stored after ingestion and transformation.
Types: Data Lakes: Store raw, unstructured, or semi-structured data (e.g., S3, Azure Data Lake). Data Warehous...
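To make these terms concrete, here is a minimal PySpark sketch that walks one batch of data through ingestion, transformation, and storage. The file paths and the "amount" column used when filling missing values are hypothetical, not taken from the post.

from pyspark.sql import SparkSession

spark = SparkSession.builder \
    .appName("Pipeline Key Terms Sketch") \
    .getOrCreate()

# 1-2. Data source + batch ingestion: read raw CSV data (hypothetical path)
raw_df = spark.read.csv("/data/raw/orders.csv", header=True, inferSchema=True)

# 3. Data transformation: remove duplicates and fix missing values
clean_df = raw_df.dropDuplicates().fillna({"amount": 0.0})

# 4. Data storage: write the cleaned data to a data-lake style location
clean_df.write.mode("overwrite").parquet("/data/lake/orders/")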

AWS CLI and PySpark: A Beginner's Comprehensive Guide

AWS (Amazon Web Services) and PySpark are separate technologies, but they can be used together for certain purposes. Let me provide you with a beginner's guide to both AWS and PySpark separately.

AWS (Amazon Web Services):
Amazon Web Services (AWS) is a cloud computing platform that offers a wide range of services for computing power, storage, databases, machine learning, analytics, and more.

1. Create an AWS Account: Go to the AWS homepage. Click on "Create an AWS Account" and follow the instructions.
2. Set Up the AWS CLI: Install the AWS Command Line Interface (AWS CLI) on your local machine. Configure it with your AWS credentials using aws configure.
3. Explore AWS Services: AWS provides a variety of services. Familiarize yourself with core services like EC2 (Elastic Compute Cloud), S3 (Simple Storage Service), and IAM (Identity and Access Management).

PySpark:
PySpark is the Python API for Apache Spark, a fast and general-purpose cluster computing system. It allows you ...
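As a bridge between the two halves of this guide, here is a minimal sketch of reading an S3 object from PySpark once the CLI credentials are in place. It assumes aws configure has stored valid credentials and that Spark was launched with the hadoop-aws package so s3a:// paths resolve; the bucket name and file path are hypothetical.

from pyspark.sql import SparkSession

# A minimal sketch: assumes `aws configure` has stored valid credentials
# and Spark has the hadoop-aws package so s3a:// paths are understood.
spark = SparkSession.builder \
    .appName("S3 Read Sketch") \
    .getOrCreate()

# Hypothetical bucket and key; replace with your own.
df = spark.read.csv("s3a://my-example-bucket/data/sample.csv",
                    header=True, inferSchema=True)
df.show(5)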

How to Handle Spaces in PySpark DataFrame Columns

In PySpark, you can employ SQL queries by importing your CSV file data into a DataFrame. However, you might face problems when dealing with spaces in the column names of the DataFrame. Fortunately, there is a solution available to resolve this issue.

Reading a CSV File into a DataFrame
Here is the PySpark code for reading a CSV file and writing it to a DataFrame.

from pyspark.sql import SparkSession

# Initiate session
spark = SparkSession.builder \
    .appName("PySpark Tutorial") \
    .getOrCreate()

# Read CSV file into the df DataFrame
data_path = '/content/Test1.csv'
df = spark.read.csv(data_path, header=True, inferSchema=True)

# Create a temporary view for the DataFrame
df.createOrReplaceTempView("temp_table")

# Read data from the temporary view
spark.sql("select * from temp_table").show()

Output

+----------+-----+---------------+---------------+
|Student ID| Year|Semester1 Marks|Semester2 Marks|
+----------+-----+---------------+---------------+
|       si1|year1|          62.08|           62.4|
|       si1|year2|          75.94|          76.75|
|       si...
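The excerpt cuts off before the fix, but one common way to handle such columns (a sketch of a standard technique, not necessarily the post's exact solution) is to quote them with backticks in Spark SQL, or to rename them up front with withColumnRenamed so later queries need no quoting.

# Querying a column whose name contains a space: wrap it in backticks.
spark.sql("select `Student ID`, `Semester1 Marks` from temp_table").show()

# Alternatively, rename the columns once so later SQL needs no quoting.
df_renamed = df.withColumnRenamed("Student ID", "student_id") \
               .withColumnRenamed("Semester1 Marks", "semester1_marks")
df_renamed.createOrReplaceTempView("temp_table_clean")
spark.sql("select student_id, semester1_marks from temp_table_clean").show()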