Sunday, February 26, 2017

Facebook Data Mining


    
Mining data from Facebook has become quite popular and useful over the past few years. Crawled or scraped data is valuable for commercial, scientific, and many other kinds of prediction and analysis, especially once it has been processed further, for example through data cleaning and machine learning. Without a doubt, data mining, which serves as the base layer of the whole data pipeline, is of paramount importance.
Because data enthusiasts have shown such intense interest in Facebook's data, Facebook provides a developer site that allows developers to access it. The site offers simple, easy-to-grasp methods with detailed guidelines for users to learn how to access its resources.
This Facebook API, known as the Graph API, is a REST (Representational State Transfer) interface built on standard web architecture. In practice, this means Facebook is called remotely over HTTP, using methods such as GET and POST to send requests and receive responses from the REST service.
Take Coca-Cola's Facebook page as an example. If users want to retrieve the posts on its wall, all they need to do is enter:
https://graph.facebook.com/cocacola/feed and the system will return the results as JSON. JSON (JavaScript Object Notation) is a data interchange format that is easy for people to read and write, and easy for machines to parse and generate. The returned fields include the message ID, the message content, the author, the author ID, and other information. Not only the wall feed but all other Facebook objects can be retrieved with the same URL structure. If a request cannot be completed, for example because the path is wrong or a required access token is missing, the API returns an error object like this one:
    {
        "error": {
            "message": "Unknown path components: /CONNECTION_TYPE",
            "type": "OAuthException",
            "code": 2500,
            "fbtrace_id": "AU3Q0qQUX1/"
        }
    }
Here we should note that we can only access the data directly when the objects are public; if the objects are private, we must provide an access token.
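As a quick, minimal sketch of calling this endpoint from Python (assuming the requests package is installed; the page name and access token below are placeholders, not values from this post), the request and the error check might look like this:

import requests

# Placeholder values: substitute a real page name and, if required, a valid access token.
PAGE = "cocacola"
ACCESS_TOKEN = "YOUR_ACCESS_TOKEN"

# Request the page feed from the Graph API; the response body is JSON.
url = "https://graph.facebook.com/{}/feed".format(PAGE)
response = requests.get(url, params={"access_token": ACCESS_TOKEN})
data = response.json()

if "error" in data:
    # Failures come back as an "error" object like the example above.
    print("Graph API error:", data["error"]["message"])
else:
    # Each item in "data" is a post with fields such as "id" and "message".
    for post in data.get("data", []):
        print(post.get("id"), post.get("message", ""))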
Users should be happy to hear that there is an R package called Rfacebook that provides an interface to the Facebook API. For mining Facebook with R, the Rfacebook package offers functions that let R query Facebook's API for information about posts, comments, likes, groups that mention specific keywords, and much more; for example, its searchPages() function searches for pages by keyword. Apart from R, many people prefer Python, and here are a few tips for reference. First, check out the documentation on Facebook's Graph API at https://developers.facebook.com/docs/reference/api/. If you are not familiar with JSON, do read a tutorial on it (for instance http://secretgeek.net/json_3mins.asp). Once you grasp the concepts, start using the API. For Python, there are several alternatives:
  • facebook/python-sdk https://github.com/facebook/python-sdk 
  • pyFaceGraph https://github.com/iplatform/pyFaceGraph/
  • It is also fairly trivial to write a simple HTTP client that uses the Graph API directly.
Users are encouraged to check out these Python libraries, try the examples in their documentation, and see whether they already do what you need; a short sketch follows below. Compared with R, Python can simplify the data-processing workflow by saving time on code management, output, and notes, while R is better suited to graph visualization, for example plotting your network of Facebook friends.
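As an illustration of the first option, here is a minimal sketch using the facebook-sdk package listed above (imported as facebook); it assumes the package is installed and that you supply your own valid access token, and the page name is just a placeholder:

import facebook

# Placeholder token: obtain one from the Graph API Explorer or your own Facebook app.
ACCESS_TOKEN = "YOUR_ACCESS_TOKEN"

# Create a Graph API client authenticated with the token.
graph = facebook.GraphAPI(access_token=ACCESS_TOKEN)

# Fetch the page object, then the posts connected to it via the "feed" edge.
page = graph.get_object("cocacola")
feed = graph.get_connections(id=page["id"], connection_name="feed")

for post in feed.get("data", []):
    print(post.get("id"), post.get("message", ""))

The same get_connections() call can be pointed at other edges, such as the comments or likes of a specific post ID.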
                                                                                                         
There are also data extraction tools for people without any programming skills to scrape or crawl data from Facebook, such as Octoparse and Visual Scraper.
Octoparse:                                                                          
Octoparse is a powerful web scraper that can scrape both static and dynamic websites, including those that use AJAX, JavaScript, cookies, etc. First, download the client and then set up your scraping tasks. The software requires no programming skills, but you should learn the rules it provides to help users extract data. In addition, it offers a cloud service and proxy server settings to prevent IP blocking and speed up the extraction process.

  
If you would like to know more, please visit http://www.octoparse.com/

Visual Scraper:
Visual Scraper is another great free web scraper with a simple point-and-click interface that can be used to collect data from the web. You can get real-time data from several web pages and export the extracted data as CSV, XML, JSON, or SQL files. The freeware, which is available for Windows, lets a single user scrape data from up to 50,000 web pages. Besides the SaaS offering, Visual Scraper provides web scraping services such as data delivery and building custom software extractors.

   
 If you want to know more, please visit http://www.visualscraper.com/pricing


Author: The Octoparse Team
