Internet Of Things Awesome Basics You Need to Read Now: Part-4

How propagator nodes will work in communicating IoT devices: As noted previously, replicating even this highly efficient chirp protocol traffic indiscriminately throughout the IoT would clearly choke the network, so intelligence must be applied at levels above the individual end devices.

  • This is the responsibility of propagator nodes, which are devices that create an overarching network topology to organize the sea of machine-to-machine interactions that make up the Internet of Things.
  • Propagator nodes are typically a combination of hardware and software distantly similar to WiFi access points. They handle "local" end devices, meaning that they interact with end devices essentially within the (usually) wireless transmission range of the propagator node. They can be specialized or used to receive chirps from a wide array of end devices. Eventually, there would be tens or perhaps hundreds of thousands of propagator nodes in a city like Las Vegas. 
  • Propagator nodes will use their knowledge of adjacencies to form a near-range picture of the network. They will locate in-range nearby propagator nodes, as well as end devices and integrator functions either attached directly to or reached via those propagator nodes. They use this information to create the network topology, eliminating loops and creating alternate paths for survivability (a minimal sketch of this step follows the list).
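
The topology-building step can be pictured with a short Python sketch. The class and function names here are hypothetical; the sketch only assumes that each propagator node knows its in-range adjacencies, keeps one primary path per neighbor, and holds the loop-closing adjacencies back as alternate paths.

from collections import deque

class PropagatorNode:
    def __init__(self, node_id):
        self.node_id = node_id
        self.adjacent = set()     # in-range propagator nodes
        self.end_devices = set()  # locally attached chirp end devices

    def link(self, other):
        # Record a bidirectional adjacency discovered by in-range listening.
        self.adjacent.add(other)
        other.adjacent.add(self)

def build_topology(root):
    # Breadth-first pass: keep one primary (loop-free) path per node and
    # set the pruned adjacencies aside as alternate paths for survivability.
    parent = {root.node_id: None}
    alternates = set()
    queue = deque([root])
    while queue:
        node = queue.popleft()
        for neighbor in node.adjacent:
            if neighbor.node_id not in parent:
                parent[neighbor.node_id] = node.node_id
                queue.append(neighbor)
            elif parent[node.node_id] != neighbor.node_id:
                alternates.add(tuple(sorted((node.node_id, neighbor.node_id))))
    return parent, alternates

# Three propagator nodes in range of each other form a triangle; one adjacency
# is kept only as an alternate path so the primary topology stays loop-free.
a, b, c = PropagatorNode("A"), PropagatorNode("B"), PropagatorNode("C")
a.link(b); b.link(c); c.link(a)
primary, alternates = build_topology(a)
print(primary)     # e.g. {'A': None, 'B': 'A', 'C': 'A'}
print(alternates)  # e.g. {('B', 'C')}
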
The propagator nodes will intelligently package and prune the various chirp messages before broadcasting them to adjacent nodes. By examining the public markers, the simple checksum, and the "arrow" of transmission (toward end devices or toward integrator functions), they will discard damaged or redundant messages. Groups of messages that are all to be propagated via an adjacent node may be bundled into one "meta" message, a small data "stream", for efficient transmission. Arriving "meta" messages may be unpacked and repacked.
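
A rough Python sketch of that prune-and-bundle step is below. The chirp field names ("marker", "arrow", "checksum") and the byte-sum checksum are assumptions for illustration, not the actual chirp format: damaged or already-seen chirps are dropped, and the survivors pointing toward the same direction are packed into a single "meta" message.

import json

def checksum(payload: str) -> int:
    # Stand-in for the chirp's simple checksum (assumed here to be a byte sum).
    return sum(payload.encode()) % 256

def prune_and_bundle(chirps, seen_markers):
    # chirps: list of dicts with 'marker', 'arrow', 'payload', 'checksum'.
    # 'arrow' points either 'to_integrator' or 'to_end_device'.
    bundles = {}
    for chirp in chirps:
        if checksum(chirp["payload"]) != chirp["checksum"]:
            continue                      # damaged: discard
        if chirp["marker"] in seen_markers:
            continue                      # redundant: discard
        seen_markers.add(chirp["marker"])
        bundles.setdefault(chirp["arrow"], []).append(chirp)
    # One small data "stream" per direction, ready to send to the adjacent node.
    return {arrow: json.dumps(group) for arrow, group in bundles.items()}

chirps = [
    {"marker": "temp-17", "arrow": "to_integrator", "payload": "21.5C", "checksum": checksum("21.5C")},
    {"marker": "temp-17", "arrow": "to_integrator", "payload": "21.5C", "checksum": checksum("21.5C")},  # duplicate
    {"marker": "valve-3", "arrow": "to_end_device", "payload": "OPEN", "checksum": 0},                   # damaged
]
print(prune_and_bundle(chirps, set()))  # only the first chirp survives, bundled toward the integrator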

Some classes of propagator nodes will contain a software publishing agent. This publishing agent interacts with particular integrator functions to optimize data forwarding on behalf of the integrator. Propagator nodes with publishing agents may be "biased" to forward certain information in particular directions based on routing instructions passed down from the integrator functions interested in communicating with a particular functional, temporal, or geographic "neighborhood" of end devices. 
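
A minimal sketch of how such a bias might look in code follows, again with hypothetical names. The integrator passes down routing instructions as pairs of a neighborhood predicate and a preferred next hop; chirps that match are forwarded toward that integrator, and everything else follows the default adjacent-node paths.

def make_publishing_agent(routing_instructions):
    # routing_instructions: list of (matches_neighborhood, preferred_next_hop)
    # pairs passed down from the integrator functions.
    def choose_next_hops(chirp, default_hops):
        for matches_neighborhood, preferred_hop in routing_instructions:
            if matches_neighborhood(chirp):
                return [preferred_hop]      # biased: forward toward the interested integrator
        return default_hops                 # otherwise use the normal adjacent-node paths
    return choose_next_hops

# An integrator interested in the geographic neighborhood "block-7" asks the
# propagator node to push matching chirps toward its "uplink-east" adjacency.
agent = make_publishing_agent([
    (lambda chirp: chirp.get("neighborhood") == "block-7", "uplink-east"),
])
print(agent({"marker": "temp-17", "neighborhood": "block-7"}, ["node-B", "node-C"]))  # ['uplink-east']
print(agent({"marker": "valve-3", "neighborhood": "block-9"}, ["node-B", "node-C"]))  # ['node-B', 'node-C']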

It is the integrator functions that will dictate the overall communications flow, based on their need to get data from, or set parameters in, a neighborhood of IoT end devices.
