Web Scraping Language (WSL)

WSL is a declarative domain-specific language for the web. It automates browser actions such as following sets of links (aka crawling), extracting data from each page that loads, and filling out forms. Actions run in order, separated by the pipe operator >>


- You reference element(s) on the page with a CSS or XPath selector
- You extract data via JSON: { 'product': '.title', 'usd': '.price', ... }, where each key names an output column and each value is a selector

Web Crawling Scenarios

Crawl URL Params

Format: GOTO URL[range] >> EXTRACT {json}
Example: GOTO github.com/search?p={{1-3}}&q={{['cms', 'chess', 'minecraft']}} >> EXTRACT {'title': 'h3'}

3 pages × 3 keywords = 9 URL permutations will be crawled and their data extracted.
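The expansion above is a Cartesian product of the page range and the keyword list. A rough Python sketch of it, taking the range {{1-3}} and the keywords from the example (everything else here is illustrative, not part of WSL):

```python
from itertools import product

pages = range(1, 4)                       # {{1-3}}
keywords = ['cms', 'chess', 'minecraft']  # {{['cms', 'chess', 'minecraft']}}

# every (page, keyword) pair becomes one URL to crawl
urls = [f"github.com/search?p={p}&q={q}" for p, q in product(pages, keywords)]
print(len(urls))  # → 9
```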

Crawl & Extract

Example: GOTO en.wikipedia.org/wiki/List_of_Dexter_episodes >> CRAWL .summary a >> EXTRACT {'title': 'h1', 'code': '//tr[7]/td'}

Follows each Dexter episode link and extracts the title and production code.

Paginated Crawl

Format: CRAWL [strategy, pageEnd, pageStart] SELECTOR
1. Click a 'next page' element, then run the crawl again on each subsequent page.
2. Mouse-wheel scroll ('autoscroll') to load the next page.
3. Click numbered elements that load the next page.

1. GOTO news.ycombinator.com >> CRAWL ['.morelink'] .hnuser
GOTO news.ycombinator.com >> CRAWL ['.morelink',4] .hnuser
1st ex: Crawls every page via .morelink until that element can no longer be found.
2nd ex: Navigates via .morelink until the 4th page is reached.
2. GOTO news.ycombinator.com >> CRAWL ['autoscroll',2] .hnuser
GOTO news.ycombinator.com >> CRAWL ['autoscroll',4,3] .hnuser
1st ex: Crawls past the first page by scrolling down one page length, stopping after the 2nd page.
2nd ex: Starts crawling at the 3rd page and continues until the 4th page.
3. GOTO news.ycombinator.com >> CRAWL ['number'] .hnuser
GOTO news.ycombinator.com >> CRAWL ['number',4,3] .hnuser
1st ex: Finds a numbered page link or element and increments it exhaustively.
2nd ex: Starts at the 3rd page by finding and clicking its numbered link, then continues until the 4th.
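The pagination strategies above share one loop shape: visit a page, then advance until the element disappears or pageEnd is reached. A minimal sketch under that assumption; FakePage and its click() method are stand-ins for a real browser driver, not part of WSL:

```python
class FakePage:
    """Toy page object standing in for a driver-controlled browser page."""
    def __init__(self, number, total):
        self.number, self.total = number, total

    def click(self, selector):
        # returns the next page, or None once the pagination element is gone
        return FakePage(self.number + 1, self.total) if self.number < self.total else None

def paginated_crawl(first_page, next_selector, page_end=None, page_start=1):
    """Yield pages from page_start onward, advancing via next_selector
    until the element disappears or page_end is reached."""
    page, number = first_page, 1
    while page is not None:
        if page_end is not None and number > page_end:
            break
        if number >= page_start:
            yield page
        page = page.click(next_selector)
        number += 1

# CRAWL ['.morelink', 4] stops after the 4th page:
print([p.number for p in paginated_crawl(FakePage(1, 10), '.morelink', page_end=4)])
# → [1, 2, 3, 4]
```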

Extract Rows

Example: GOTO en.wikipedia.org/wiki/List_of_Dexter_episodes >> EXTRACT { 'title': 'h1', 'aired': '//table[2]//tr[2]/td[5]' } IN .wikiepisodetable

Extracts every Dexter episode's title and air date under the parent element with class "wikiepisodetable".

Paginated Extract

Format: GOTO URL >> EXTRACT ['.selector', limit] {json}
Example: GOTO news.ycombinator.com >> EXTRACT ['.morelink',2] {'news': '.storylink'}

Extracts every news headline on each page, stopping after the 2nd page.

Nested Crawls

Example: GOTO github.com/marketplace >> CRAWL nav/ul/li/a >> CRAWL .h4 >> EXTRACT {'app': 'h1', 'language': '.py-3 .d-block'}

Follows each category link, then every app link on the first page of results, and extracts the app name and supported languages. Crawls recursively!
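Chained CRAWL commands nest: each CRAWL fans out over the links its selector matches, and EXTRACT runs on the leaf pages. A sketch of that recursion, assuming a tree of page objects; Node and its links() method are illustrative stand-ins, not part of WSL:

```python
class Node:
    """Toy page: children maps a selector to the pages it links to."""
    def __init__(self, name, children=None):
        self.name, self.children = name, children or {}

    def links(self, selector):
        return self.children.get(selector, [])

def nested_crawl(page, selectors, extract):
    if not selectors:                      # no CRAWL left: extract this leaf
        return [extract(page)]
    results = []
    for link in page.links(selectors[0]):  # fan out over the first CRAWL
        results.extend(nested_crawl(link, selectors[1:], extract))
    return results

marketplace = Node('marketplace', {
    'nav/ul/li/a': [
        Node('category-1', {'.h4': [Node('app-a'), Node('app-b')]}),
        Node('category-2', {'.h4': [Node('app-c')]}),
    ],
})
print(nested_crawl(marketplace, ['nav/ul/li/a', '.h4'], lambda p: p.name))
# → ['app-a', 'app-b', 'app-c']
```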

Typing Text

Format: GOTO URL >> TYPE ['keyword1', 'keyword2', ...] IN SELECTOR >> TYPE [KEY_...] >> EXTRACT ...
Example: GOTO github.com/search?q= >> TYPE ["time", "security", "social"] IN input[@name="q"] >> TYPE ["KEY_ENTER"] >> EXTRACT {"search url": ".text-gray-dark.mr-2"}

For each keyword, we send KEY_ENTER to submit the search form via the return key, then crawl the first page of search results and scrape each result's URL into a data column named "search url".
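The TYPE loop fans the keyword list out into one search per keyword, each submitted and then extracted. A rough sketch of that shape; the lambdas below are toy stand-ins for the real browser round trip, not WSL internals:

```python
def type_keywords(keywords, submit_search, extract):
    """Run one search per keyword and collect the extracted data."""
    return {kw: extract(submit_search(kw)) for kw in keywords}

results = type_keywords(
    ["time", "security", "social"],
    submit_search=lambda kw: f"github.com/search?q={kw}",  # stand-in for TYPE + KEY_ENTER
    extract=lambda page: {"search url": page},             # stand-in for EXTRACT
)
print(results["time"])  # → {'search url': 'github.com/search?q=time'}
```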

Clicking & Forms

Format: CLICK[n-index] SELECTOR
1. Find elements matching the selector and click the Nth one; note you can also use an XPath index in the selector itself!
2. Try every possible permutation for selected forms, crawling dropdown options, etc.
3. Click a link and execute a macro action such as downloading a file.

1. GOTO news.ycombinator.com/login >> CLICK input >> CLICK input[last()] >> CLICK input[3] >> CLICK[3] input

Clicks the first element, then the last element, and finally shows two equivalent ways of selecting the same 3rd element.

2. GOTO redux-form.com/6.6.3/examples/simple/ >> TYPE ['user1@x.com', 'user2@x.com'] IN input[@name="email"] >> CRAWL select/option

For each email address entered, we try every dropdown option.

3. GOTO https://www.putty.org >> CLICK //tr[1]/td[2]/p[2]/a >> DOWNLOAD //div[1]/div[2]/span[2]

Clicks a link that navigates to a different domain. We save the file with the macro command wrapped in double underscores: __SAVE__.

email: support@scrape.it
© Brilliant Code Inc. 1918 Boul Saint Régis, Dorval, Québec, Canada. Let's free information through collaboration!