
5 Tricky AWS Interview Questions

Below are five tricky questions commonly asked in AWS interviews.


5 Tricky Questions on Amazon Web Services for IT developers.


1) How do you store objects in Reduced Redundancy Storage in S3?

  • Specify REDUCED_REDUNDANCY tags in the HTML code.
  • Specify REDUCED_MIRRORING on a PUT request using an optional header.
  • Specify LOW_REPLICATION on a PUT request using an optional header.
  • Specify LOW_REPLICATION on a GET request for objects.
  • Specify REDUCED_REDUNDANCY on a PUT request using an optional header.
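For reference, S3 storage classes (including reduced redundancy) are set per object at upload time and travel with the PUT request as a storage-class header. A minimal boto3 sketch, assuming a placeholder bucket and key:

```python
import boto3

# Minimal sketch: upload an object with the REDUCED_REDUNDANCY storage class.
# boto3 sends this as the x-amz-storage-class header on the underlying PUT request.
# The bucket name and key below are placeholders.
s3 = boto3.client("s3")

s3.put_object(
    Bucket="example-bucket",           # placeholder bucket name
    Key="reports/sample.csv",          # placeholder object key
    Body=b"col1,col2\n1,2\n",
    StorageClass="REDUCED_REDUNDANCY"  # optional storage-class setting on the PUT
)
```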


2) Which is required to log on to the AWS Management Console on a mobile device?


  • SSO alias
  • SiteMinder
  • Gateway address
  • IAM token
  • AWS account


3) How do you receive notifications based on the value of a metric being above or below a stated threshold in Amazon CloudWatch?


  • Set Amazon SmartAlert.
  • Log on through the Amazon console.
  • Create an Amazon CloudWatch threshold dashboard.
  • Create an Amazon CloudWatch alarm.
  • Create a DescribeAlarmHistory API.
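For reference, threshold-based notifications in CloudWatch are wired up through alarms that trigger an action such as publishing to an SNS topic. A minimal boto3 sketch, assuming a placeholder instance ID and SNS topic ARN:

```python
import boto3

# Minimal sketch: a CloudWatch alarm that notifies an SNS topic when average
# CPU utilization stays above a threshold. The instance ID and topic ARN are placeholders.
cloudwatch = boto3.client("cloudwatch")

cloudwatch.put_metric_alarm(
    AlarmName="high-cpu-example",
    Namespace="AWS/EC2",
    MetricName="CPUUtilization",
    Dimensions=[{"Name": "InstanceId", "Value": "i-0123456789abcdef0"}],  # placeholder
    Statistic="Average",
    Period=300,                       # evaluate in 5-minute windows
    EvaluationPeriods=2,              # two consecutive breaches trigger the alarm
    Threshold=80.0,                   # percent CPU
    ComparisonOperator="GreaterThanThreshold",
    AlarmActions=["arn:aws:sns:us-east-1:123456789012:example-topic"],    # placeholder
)
```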


4) Scenario: You are testing Oracle, SQL Server, and MongoDB databases in your AWS account to determine which database to use for your application. Question: Based on the scenario above, how do you decrease the processing time of testing these databases?

  • Transfer the installed files and folders for each database from your local computer to S3.
  • Set up Database as a Service (DaaS) from these vendors.
  • Download DB software directly to Amazon from the vendor site.
  • Configure the File Transfer Protocol (FTP) for the database software from your local computer to S3.
  • Select vendor-provided database AMIs.
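For reference, vendor-provided AMIs let you skip manual installation by launching a preconfigured database host directly. A minimal boto3 sketch, assuming a placeholder AMI ID and instance type:

```python
import boto3

# Minimal sketch: launch a test database host from a vendor-provided AMI
# found in the AMI catalog / AWS Marketplace. The AMI ID and instance type
# below are placeholders, not real identifiers.
ec2 = boto3.client("ec2")

ec2.run_instances(
    ImageId="ami-0123456789abcdef0",  # placeholder vendor database AMI
    InstanceType="m5.large",          # placeholder size for a test instance
    MinCount=1,
    MaxCount=1,
)
```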


5) How do you verify your Amazon CloudSearch application is ready for release to end-users?

  • Look for error logs in the Amazon CloudSearch diagnostics sections.
  • Set up a search engine task using the IP address of the CloudSearch server.
  • Look at the keywords from the indexes in Amazon CloudSearch to ensure readiness.
  • Test sample queries using the search tester in the Amazon CloudSearch console.
  • Create APIs to test the output from Amazon CloudSearch.
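For reference, the CloudSearch console's search tester issues sample queries against the domain's search endpoint, and the same check can be scripted. A minimal boto3 sketch, assuming a placeholder search endpoint URL:

```python
import boto3

# Minimal sketch: run a sample query against a CloudSearch domain's search
# endpoint (the console search tester does the same thing interactively).
# The endpoint URL below is a placeholder for your domain's search endpoint.
client = boto3.client(
    "cloudsearchdomain",
    endpoint_url="https://search-example-domain-abc123.us-east-1.cloudsearch.amazonaws.com",
)

response = client.search(query="sample keyword", queryParser="simple", size=5)
print(response["hits"]["found"], "matching documents")
```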
Answers: Click here

Comments

  1. Thanks for this amazing post. Amazon interview questions keep changing, but the objective of testing a candidate's problem-solving ability is constant.

