LinkedIn Data Scraping with BeautifulSoup
Today I would like to do some web scraping of LinkedIn job postings. There are two ways to go:
- Source code extraction
- Using the LinkedIn API
I chose the first option, mainly because the API is poorly documented and I wanted to experiment with BeautifulSoup. BeautifulSoup, in a few words, is a library that parses HTML pages and makes it easy to extract data from them.
Official page: https://www.crummy.com/software/BeautifulSoup/
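Since the original code cells aren't shown here, here is a minimal sketch of the imports and the small helper I'll assume in the rest of the post (the get_soup name is mine):

```python
import requests
from bs4 import BeautifulSoup

def get_soup(url):
    """Fetch a page and parse it into a BeautifulSoup object."""
    response = requests.get(url)
    response.raise_for_status()  # fail loudly on HTTP errors
    return BeautifulSoup(response.text, "html.parser")
```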
Now that the libraries are imported and the helper functions are defined, I'll get job postings from LinkedIn.
Inspecting the source code of the page shows where to access the elements we are interested in. I did that with the browser's 'inspect element' tool.
I will look for "Data Scientist" postings. Note that I'll keep the quotes in my search, because otherwise I'd get irrelevant postings that merely contain the words "Data" and "Scientist".
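As a sketch, fetching the search page could look like this; the query parameters follow the URLs examined further below, and the quotes are URL-encoded as %22 so they survive in the request:

```python
from urllib.parse import quote_plus

# Keep the literal quotes around the phrase so the search treats it as one term.
keywords = quote_plus('"Data Scientist"')  # -> %22Data+Scientist%22
url = ("https://www.linkedin.com/jobs/search"
       f"?keywords={keywords}&locationId=fr:0&start=0&count=25")
soup = get_soup(url)
```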
Below, we are only interested in the div element with class 'results-context', which contains a summary of the search, in particular the number of items found.
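A sketch of that lookup (the exact text format inside the div is an assumption, so I only print the raw string):

```python
# The search summary lives in a div with class 'results-context'.
results_context = soup.find("div", class_="results-context")
if results_context is not None:
    print(results_context.get_text(strip=True))  # includes the number of items found
```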
Now let's check the number of postings we got on one page.
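The class name of the individual job cards isn't shown in this post, so 'job-card' below is a placeholder to illustrate the idea:

```python
# Count the job cards on the current page ('job-card' is a placeholder class).
postings = soup.find_all("li", class_="job-card")
print(len(postings))  # should match count=25 from the URL
```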
To be able to extract all the postings, I need to iterate over the pages, so I'll examine the URLs of the different pages to work out the logic.
- URL of the first page:
  https://www.linkedin.com/jobs/search?keywords=Data+Scientist&locationId=fr:0&start=0&count=25&trk=jobs_jserp_pagination_1
- URL of the second page:
  https://www.linkedin.com/jobs/search?keywords=Data+Scientist&locationId=fr:0&start=25&count=25&trk=jobs_jserp_pagination_2
- URL of the third page:
  https://www.linkedin.com/jobs/search?keywords=Data+Scientist&locationId=fr:0&start=50&count=25&trk=jobs_jserp_pagination_3
There are two elements changing between pages:
- start, which is 25 times the zero-based page index (0, 25, 50, …)
- the trk pagination suffix (jobs_jserp_pagination_1, _2, _3, …)
I also noticed that the pagination number in trk doesn't have to be changed to go to the next page, which means I can change only the start value to get the next postings (maybe LinkedIn's developers should do something about that…).
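Putting that observation to work, here is a sketch of the pagination loop; the total count would come from the 'results-context' div found earlier (the value below is a placeholder):

```python
import math

count = 25
total_postings = 1000  # placeholder: the number read from 'results-context'
num_pages = math.ceil(total_postings / count)

pages = []
for page in range(num_pages):
    start = page * count  # the only value that actually needs to change
    page_url = ("https://www.linkedin.com/jobs/search"
                f"?keywords={keywords}&locationId=fr:0&start={start}&count={count}")
    pages.append(get_soup(page_url))
```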
As I mentioned above, finding where the job details live is made easy by viewing the source code in any browser.
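For illustration, extracting a few fields from each card might look like this; the tag and class names here are my assumptions based on what the inspector typically shows, not the exact ones from the original page:

```python
jobs = []
for page_soup in pages:
    for card in page_soup.find_all("li", class_="job-card"):  # placeholder class
        jobs.append({
            "title": card.find("h3").get_text(strip=True),
            "company": card.find("h4").get_text(strip=True),
            "location": card.find("span", class_="job-location").get_text(strip=True),
            "link": card.find("a")["href"],
        })
```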
Next, it's time to create the data frame.
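With the postings collected in a list of dicts, building the data frame is a single call (assuming pandas):

```python
import pandas as pd

df = pd.DataFrame(jobs, columns=["title", "company", "location", "link"])
df.head()
```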
Now the table is filled with the above columns.
Just to verify, I can check the size of the table to make sure I got all the postings.
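A quick sanity check, comparing the number of rows against the total reported by the 'results-context' div:

```python
print(df.shape)  # (rows, columns)
assert len(df) == total_postings  # matches the count from 'results-context'
```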
In the end, I got an actual dataset just by scraping web pages. Gathering data has never been this easy.
I can even go further by parsing the description page of each posting and extracting information (see the sketch after this list) such as:
- Level
- Description
- Technologies
…
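A hedged sketch of that follow-up step: fetch each posting's page via its link and keep the raw description text, from which fields like the level or technologies could later be extracted (the 'description' class is again a placeholder):

```python
def parse_posting(link):
    """Fetch one posting page and return its raw description text."""
    posting_soup = get_soup(link)
    description = posting_soup.find("div", class_="description")  # placeholder class
    return description.get_text(" ", strip=True) if description else None

df["description"] = df["link"].apply(parse_posting)
```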
There is no limit to how far we can exploit the information in HTML pages thanks to BeautifulSoup. You just have to read the documentation, which is very good by the way, and practice on real pages.
Ciao!