
10 Exclusive Steps You Need for Web Scraping

Here are ten Python techniques to clean scraped data. Scraped text carries unwanted hidden noise, so apply these ten steps as part of cleaning it.

10 Steps for Web Scraping

Data is the prime input for text analytics projects. After cleaning, you can feed it to machine/deep learning systems.
  1. Removing HTML tags
  2. Tokenization
  3. Removing unnecessary tokens and stop-words
  4. Handling contractions
  5. Correcting spelling errors
  6. Stemming
  7. Lemmatization
  8. Tagging
  9. Chunking
  10. Parsing

10 Techniques to Clean Text in Python


1. Removing HTML tags

Unstructured text collected by web/screen scraping (data from web pages, blogs, and online repositories) contains a lot of noise.

HTML tags, JavaScript, and iframe tags typically add little value for understanding and analyzing text. Our purpose is to remove HTML tags and other noise, as in the sketch below.
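
A minimal sketch of this step using BeautifulSoup, assuming the bs4 package is installed; the sample HTML string is made up for illustration.

# Assumes: pip install beautifulsoup4; the HTML below is a made-up sample.
from bs4 import BeautifulSoup

html = """
<html>
  <head><script>var x = 1;</script></head>
  <body><h1>Title</h1><p>Scraped <b>text</b> we want to keep.</p></body>
</html>
"""

soup = BeautifulSoup(html, "html.parser")

# Drop script/style/iframe elements entirely, then extract the visible text.
for tag in soup(["script", "style", "iframe"]):
    tag.decompose()

clean_text = soup.get_text(separator=" ", strip=True)
print(clean_text)  # Title Scraped text we want to keep.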


2. Tokenization

  • Tokens are minimal, independent textual components with a definite syntax and semantics. A paragraph of text or a text document has several elements that you can break down further into clauses, phrases, and words.
  • The popular tokenization techniques are sentence and word tokenization. You can use these to break a text document (or corpus) down into sentences, and each sentence into words.
  • Thus, tokenization is the process of breaking down or splitting textual data into smaller, more meaningful components called tokens, as sketched after this list.
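
A short sketch with NLTK, assuming the nltk package is installed and its punkt tokenizer data is available (the exact resource name can vary by NLTK version).

import nltk
nltk.download("punkt", quiet=True)  # one-time tokenizer model download

from nltk.tokenize import sent_tokenize, word_tokenize

text = "Tokenization splits text into units. Each sentence then becomes words."

sentences = sent_tokenize(text)                # document -> sentences
words = [word_tokenize(s) for s in sentences]  # each sentence -> words

print(sentences)
print(words)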


Python is popular in text analytics, and you will find various cleaning techniques used in text analytics here:
Text Analytics in Python


3. Removing Unnecessary Stop Words

Stop words have little or no significance and are usually removed from text during processing. They occur most frequently when you aggregate a corpus of text by its tokens and their frequencies. Words like "a," "the," and "and" are stop words.
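
A minimal sketch using NLTK's English stop-word list, assuming nltk is installed and the stopwords corpus has been downloaded; the sample sentence is made up.

import nltk
nltk.download("stopwords", quiet=True)  # one-time corpus download

from nltk.corpus import stopwords

stop_words = set(stopwords.words("english"))

tokens = "The scraper pulled a lot of the raw text and noise".split()
filtered = [t for t in tokens if t.lower() not in stop_words]
print(filtered)  # ['scraper', 'pulled', 'lot', 'raw', 'text', 'noise']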


4. Handling contractions

Contractions are shortened word forms such as you'll (you will) and it's (it is). Expanding them back to their full forms normalizes the text, as in the sketch below.
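A simple dictionary-based sketch; real projects often use a dedicated library instead, and the small mapping below is hand-rolled for illustration.

import re

# A hand-picked illustration mapping; a real project would use a fuller list.
CONTRACTIONS = {
    "you'll": "you will",
    "it's": "it is",
    "don't": "do not",
    "can't": "cannot",
}

def expand_contractions(text):
    # Replace each known contraction, matching case-insensitively.
    # Note: this simple sketch does not preserve capitalization.
    pattern = re.compile("|".join(re.escape(c) for c in CONTRACTIONS), re.IGNORECASE)
    return pattern.sub(lambda m: CONTRACTIONS[m.group(0).lower()], text)

print(expand_contractions("It's likely you'll need this."))
# it is likely you will need this.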

5. Correcting spelling errors

This step auto-corrects spelling errors. When you do a Google search, for example, you will notice it corrects your spelling automatically.
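
A minimal sketch with TextBlob, assuming the textblob package and its corpora are installed; its correct() method is statistical, so always review the output.

from textblob import TextBlob

blob = TextBlob("The scraped tezt has speling erors")
print(blob.correct())
# expected: "The scraped text has spelling errors" (results can vary)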

6. Stemming

Here you reduce words to their root form by chopping off affixes. For example, a stemmer reduces running to the root run; a popular implementation is the Snowball stemmer.
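
A short sketch using NLTK's Snowball stemmer, assuming nltk is installed; the word list is made up for illustration.

from nltk.stem import SnowballStemmer

stemmer = SnowballStemmer("english")

for word in ["running", "flies", "studies", "easily"]:
    print(word, "->", stemmer.stem(word))
# running -> run, flies -> fli, studies -> studi, easily -> easili
# Note: stems are truncated roots and need not be dictionary words.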


7. Lemmatization

Based on the context, lemmatization brings words to their root form, and unlike stemming the result is a meaningful dictionary word.
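
A minimal sketch with NLTK's WordNet lemmatizer, assuming nltk is installed and the wordnet corpus (plus any companion data your NLTK version needs) has been downloaded.

import nltk
nltk.download("wordnet", quiet=True)  # one-time corpus download

from nltk.stem import WordNetLemmatizer

lemmatizer = WordNetLemmatizer()

# The pos argument supplies the context: 'v' = verb, 'n' = noun, 'a' = adjective.
print(lemmatizer.lemmatize("running", pos="v"))  # run
print(lemmatizer.lemmatize("flies", pos="n"))    # fly
print(lemmatizer.lemmatize("better", pos="a"))   # good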


8. Tagging

Tagging labels each word with a tag, most commonly its part of speech (noun, verb, adjective, and so on).
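
A short sketch using NLTK's part-of-speech tagger, assuming nltk is installed and its perceptron tagger model has been downloaded (the exact resource name can vary by NLTK version); the token list is made up.

import nltk
nltk.download("averaged_perceptron_tagger", quiet=True)  # one-time model download

tokens = ["The", "scraper", "collects", "clean", "text"]
print(nltk.pos_tag(tokens))
# [('The', 'DT'), ('scraper', 'NN'), ('collects', 'VBZ'),
#  ('clean', 'JJ'), ('text', 'NN')]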


9. Chunking

Chunking constructs higher-level phrases, such as noun phrases, from tagged words (verbs, nouns, adjectives, etc.). Check out here on Data Chunking.
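
A minimal sketch of noun-phrase chunking with NLTK's RegexpParser, assuming nltk is installed; the tagged tokens and the grammar are made up for illustration.

import nltk

# Already-tagged tokens (see the tagging step above).
tagged = [("the", "DT"), ("quick", "JJ"), ("fox", "NN"), ("jumps", "VBZ")]

# A noun phrase (NP) here is an optional determiner, any adjectives, then a noun.
grammar = "NP: {<DT>?<JJ>*<NN>}"
chunker = nltk.RegexpParser(grammar)

print(chunker.parse(tagged))
# (S (NP the/DT quick/JJ fox/NN) jumps/VBZ)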


10. Parsing

The data is passed through a set of syntax rules, and the output is then fed to machine learning systems. The syntax rules vary from project to project.
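
A short sketch of syntactic parsing with a toy context-free grammar in NLTK, assuming nltk is installed; the grammar rules here are made up for illustration and, as noted above, would differ from project to project.

import nltk

# A toy grammar; real projects define their own rules.
grammar = nltk.CFG.fromstring("""
S  -> NP VP
NP -> DT N
VP -> V NP
DT -> 'the'
N  -> 'scraper' | 'text'
V  -> 'cleans'
""")

parser = nltk.ChartParser(grammar)
for tree in parser.parse("the scraper cleans the text".split()):
    print(tree)
# (S (NP (DT the) (N scraper)) (VP (V cleans) (NP (DT the) (N text))))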
