Tag Archives: Scraping

Scraping Google Trends with Mechanize and Hpricot

This is a small Ruby script that fetches the 100 trends of the day for a specific date. If multiple dates are searched, one can find out how many times a keyword occurred between two dates, or just find out what keywords are constantly appearing on the top 100 list. The script is incomplete and one must implement the “implement me!” methods to get full functionality. This, in its current state, should serve as a good starting point for scraping Google Trends.

On a technical note, it uses Mechanize, Hpricot, and Tempfile (for the cache). A lot of this is just copy-and-paste programming from the earlier anime scraper.

To grab the gems (rdoc takes 10x as long as the gem to fetch and install):

sudo gem install mechanize --no-rdoc
sudo gem install hpricot --no-rdoc
#!/usr/bin/env ruby
# biodegradablegeek.com
# public domain
#

require 'rubygems'
require 'hpricot'
require 'tempfile'
require 'yaml' # the Cache class and load/saveState read and write YAML
require 'mechanize'
#require 'highline/import'
#HighLine.track_eof = false

$mech = WWW::Mechanize.new
$mech.user_agent_alias = 'Mac Safari'
$master = []

def puts2(txt=''); puts "*** #{txt}"; end

class Cache
  def initialize
    # Setup physical cache location
    @path = 'cache'
    Dir.mkdir @path unless File.exists? @path

    # key/val = url/filename (of fetched data)
    @datafile = "#{@path}/cache.data"
    @cache = load @datafile
  end

  def put key, val
    tf = Tempfile.new('googletrends', @path)
    path = tf.path
    tf.close! # important!

    puts2 "Saving to cache (#{path})"
    open(path, 'w') { |f|
      f.write(val)
      @cache[key] = path
    }

    save @datafile
  end

  def get key
    return nil unless exists?(key) && File.exists?(@cache[key])
    open(@cache[key], 'r') { |f| f.read }
  end

  def files
    @cache.values
  end

  def first
    @cache.first
  end

  def exists? key
    @cache.has_key? key
  end

private
  # Load saved cache
  def load file
    return File.exists?(file) ? YAML.load(open(file).read) : {}
  end

  # Save cache
  def save path
    open(path, 'w') { |f|
      f.write @cache.to_yaml
    }
  end
end

$cache = Cache.new

def fetch(url)
  body = $mech.get(url).body()
  $cache.put(url, body)
  body
end

def getPage(url)
  body = $cache.get(url)

  if body.nil?
    puts "Not cached. Fetching from site..."
    body = fetch url
  end
  body
end

def loadState
  mf = 'cache/master.data'
  $master = File.exists?(mf) ? YAML.load(open(mf).read) : []
  $master = [] unless $master.is_a?(Array) # YAML.load returns false for an empty file
end

def saveState
  open('cache/master.data', 'w+') { |f|
    f.write $master.to_yaml
  }
end

def main
  #loadState

  # Grab top 100 Google Trends (today)
  #date = Time.now.strftime '%Y-%m-%d'
  date = '2009-01-21'

  puts2 "Getting Google's top 100 search trends for #{date}"
  url = "http://www.google.com/trends/hottrends?sa=X&date=#{date}"
  puts2 url

  begin
    body = getPage(url)
  rescue WWW::Mechanize::ResponseCodeError
    puts2 "Couldn't fetch URL. Invalid date..?"
    exit 5
  end

  puts2 "Fetched page (#{body.size} bytes)"

  if body['There is no data on date']
    puts2 'No data available for this date.'
    puts2 'Date might be too old or too early for report, or just invalid'
    exit 3
  end

  doc = Hpricot(body)

  (doc/"td[@class='hotColumn']/table[@class='Z2_list']//tr").each do |tr|
    td = (tr/:td)
    num = td[0].inner_text.sub('.','').strip
    kw = td[1].inner_text
    url = (td[1]/:a).first[:href]
    Keyword.find_or_new(kw) << Occurance.new(num, date, url)
  end
  puts "Got info on #{$master.size} keywords for #{date}"
  puts "keyword '#{$master.first.name}' occured #{$master.first.occurances} times"
end

class Occurance
  attr_accessor :pos, :date, :url
  def initialize(pos, date, url)
    @pos = pos
    @date = date
    @url = url
  end
end

class Keyword
  attr_accessor :name, :occurances
  def initialize(name)
    @name = name
    @occurances = []
    @position_average = nil
    @count = nil
    $master << self
  end

  def self.find_or_new(name)
    x = $master.find { |m| name==m.name }
    x || Keyword.new(name)
  end

  def << occurance
    @occurances << occurance
  end

  def occured_on? datetime
    raise 'implement me'
  end

  def occured_between? datetime
    raise 'implement me'
  end

  def occurances datetime=nil
    raise 'implement me' if datetime
    @occurances.size
  end

  def occurances_between datetime
    raise 'implement me'
  end

  def pos_latest
    @occurances.last.pos
  end

  def pos_average
    @position_average
  end

  def pos_average_between datetime
    raise 'implement me'
  end
end

#   Instance= [num, date, url]
#   Keyword=[Instance, Instance, Instance]
#   Methods for keywords:
#   KW.occured_on? date
#   KW.occured_between? d1, d2
#   KW.occurances
#   KW.occurances_between? d1, d2
#   KW.pos_latest
#   KW.pos_average
#   KW.pos_average_between

#   KW has been on the top 100 list KW.occurances.size times
#   The #1 keywords for the month of January: Master.sort_by KW.occurances_between? Jan1,Jan31.pos_average_between Jan1,Jan31
#
#   Top keywords: sort by KW.occurances.size = N keyword was listed the most.
#   Top keywords for date D: Master.sort_by KW.occured_on (x).num

main
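
If you want to flesh out the “implement me!” stubs, something along these lines would do it. This is only a sketch: it assumes dates stay in the YYYY-MM-DD string form the scraper already uses (so plain string comparison orders them correctly), and it gives the *_between methods two date arguments instead of the single placeholder argument in the stubs above.

class Keyword
  # True if this keyword appeared on the top 100 list on the given date.
  def occured_on? date
    @occurances.any? { |o| o.date == date }
  end

  # True if it appeared at least once between d1 and d2 (inclusive).
  def occured_between? d1, d2
    occurances_between(d1, d2) > 0
  end

  # Number of appearances between d1 and d2 (inclusive).
  def occurances_between d1, d2
    @occurances.select { |o| o.date >= d1 && o.date <= d2 }.size
  end

  # Average list position over all recorded appearances.
  def pos_average
    return nil if @occurances.empty?
    @occurances.inject(0) { |sum, o| sum + o.pos.to_i } / @occurances.size.to_f
  end
end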

AnimeCrazy Scraper Example Using Hpricot & Mechanize

This is a little (as of now incomplete) scraper I wrote to grab all the anime video code off of AnimeCrazy (dot) net. This site doesn’t host any videos on its own server, but just embeds ones that have been uploaded to other sites (Megavideo, YouTube, Vimeo, etc). I don’t know who the original uploaders of the videos are, but I’ve seen this same collection of anime links being used on some other sites. This site has about 10,000 episodes/parts (1 movie may have 6+ parts). The scraper below was only tested with “completed anime shows” and got around 6300 episodes. The remaining content (anime movies and running anime shows) should work as-is, but I personally held off on getting those because I want to examine them closely to try cleaning up the inconsistencies as much as possible.

This scraper needs some initial setup and won’t work out of the box, but I’m including it here in the hopes that it will serve as a decent example of a small real world scraper, if you’re looking to learn the basics of scraping with Hpricot and Mechanize. Let me know if you find any use for it. I will update the posted code later this week when I have time to complete it and add some more features.

There’s one major problem with the organization of episodes on AnimeCrazy, and it’s the fact that some episodes are glued together into one post. Right now the scraper stops and asks you how to proceed when it comes across such a post. You basically need to tell the scraper whether a post (page) contains one episode (video) or multiple. If there’s one, it proceeds on its own, but if there are two, it requires that you give it the names and links of each individual episode (usually part 1 and part 2). Sometimes two episodes are together in one video, sorta like those music albums on KaZaA or LimeWire that are ripped as one huge mp3 instead of individual songs.

This only accounts for maybe 30-40 out of 6,000 videos, and it’s not that big of a deal because the amount of work needed to proceed with the scraping is small. But it IS work, and it’s a bitch slap to the entire concept of automation. Coding around the issue is a major hassle, and there would still be a high chance that some inconsistencies slip through. It would be far less work to just find another anime site that is more consistent, though the reason AnimeCrazy is good is that it’s active, and the site IS updated manually these days, as far as I can tell.

BTW, Why The Lucky Stiff rocks, and Hpricot is amazing. But the serious scrapologist should consider scrAPI or sCRUBYt (uses Hpricot) for big projects.


#!/usr/bin/env ruby
# License: Public domain. Go sell it to newbs on DigitalPoint.

require 'rubygems'
require 'hpricot'
require 'mechanize'
require 'tempfile'
require 'yaml' # the Cache class reads/writes YAML
require 'highline/import'
HighLine.track_eof = false

$mech = WWW::Mechanize.new
$mech.user_agent_alias = 'Mac Safari'

###############################
$skip_until = false
DEBUG=false
###############################

def debug?
  DEBUG
end

def puts2(txt='')
  puts "*** #{txt}"
end

#  Anime has: title, type (series, movie), series
#  Episode has name/#, description, parts (video code)

class Episode
  attr_accessor :name, :src, :desc, :cover
  def initialize(title, page)
    @src = page # parts (megavideo, youtube etc)
    @name = title
    @desc = nil # episode description
    @cover = nil # file path
  end
end

class Anime
  attr_accessor :name, :page, :completed, :anime_type, :episodes
  def initialize(title, page)
    @name = title
    @page = page
    @episodes = []
    @anime_type = 'series'
    @completed = false
  end

  def complete!
    @completed = true
  end

  def episode! episode
    @episodes << episode
  end
end

class Cache
  def initialize
    # Setup physical cache location
    @path = 'cache'
    Dir.mkdir @path unless File.exists? @path

    # key/val = url/filename (of fetched data)
    @datafile = "#{@path}/cache.data"
    @cache = load @datafile
    #puts @cache.inspect
  end

  def put key, val
    tf = Tempfile.new('animecrazy', @path)
    path = tf.path
    tf.close! # important!

    puts2 "Saving to cache (#{path})"
    open(path, 'w') { |f|
      f.write(val)
      @cache[key] = path
    }

    save @datafile
  end

  def get key
    return nil unless exists?(key) && File.exists?(@cache[key])
    open(@cache[key], 'r') { |f| f.read }
  end

  def exists? key
    @cache.has_key? key
  end

private
  # Load saved cache
  def load file
    return File.exists?(file) ? YAML.load(open(file).read) : {}
  end

  # Save cache
  def save path
    open(path, 'w') { |f|
      f.write @cache.to_yaml
    }
  end
end

$cache = Cache.new

def fetch(url)
  body = $mech.get(url).body()
  $cache.put(url, body)
  body
end

def getPage(url)
  # First let's see if this is cached already.
  body = $cache.get(url) 

  if body.nil?
    puts "Not cached. Fetching from site..."
    body = fetch url
  end
  body
end

def main
  # Open anime list (anime_list = saved HTML of sidebar from animecrazy.net)
  anime_list = Hpricot(open('anime_list', 'r') { |f| f.read })
  puts2 "Anime list open"

  # Read in the URL to every series
  masterlist = []

  (anime_list/:li/:a).each do |series|
    anime = Anime.new(series.inner_text, series[:href])
    masterlist << anime
    puts2 "Built structure for #{anime.name}..."
  end

  puts2

  puts2 "Fetched #{masterlist.size} animes. Now fetching episodes..."
  masterlist.each do |anime|
    puts2 "Fetching body (#{anime.name})"
    body = getPage(anime.page)
    puts2 "Snatched that bitch (#{body.size} bytes of Goku Goodness)"
    puts2

    doc = Hpricot(body)
    (doc/"h1/a[@rel='bookmark']").each do |episode|
      name = clean(episode.inner_text)

      if $skip_until
        #$skip_until = !inUrl(episode[:href], 'basilisk-episode-2')
        #$skip_until = nil == name['Tsubasa Chronicles']
        puts2 "Resuming from #{episode[:href]}" if !$skip_until
        next
      end

      # Here it gets tricky. This is a major source of inconsistencies in the site.
      # They group episodes into 1 post sometimes, and the only way to find
      # out from the title of the post is by checking for the following patterns
      # (7 and 8 are example episode #s)
      # X = 7+8, 7 + 8, 7 and 8, 7and8, 7 & 8, 7&8

      # If an episode has no X then it is 1 episode.
      # If it has multiple parts, they are mirrors.
      if single_episode? name
        begin

          puts2 "Adding episode #{name}..."
          ep = Episode.new(name, episode[:href])
          ep.src = getPage(episode[:href])
          anime.episode! ep
        rescue WWW::Mechanize::ResponseCodeError
          puts2 "ERROR: Page not found? Skipping..."
          puts name
          puts2 episode[:href]
        end
      else
        # If an episode DOES have X, it *may* have 2 episodes (but may have mirrors, going up to 4 parts/vids per page).
        # Multiple parts will be the individual episodes in chronological order.
        puts2 "Help me! I'm confused @ '#{name}'"
        puts2 "This post might contain multiple episodes..."

        puts2 "Please visit this URL and verify the following:"
        puts episode[:href]

        if agree("Is this 1 episode? yes/no ")
          begin
            puts2 "Adding episode #{name}..."
            ep = Episode.new(name, episode[:href])
            ep.src = getPage(episode[:href])
            anime.episode! ep
          rescue WWW::Mechanize::ResponseCodeError
            puts2 "ERROR: Page not found? Skipping..."
            puts name
            puts2 episode[:href]
          end
        else
          more = true
          while more
            ename = ask("Enter the name of an episode: ")
            eurl =  ask("Enter the URL of an episode: ")

            begin
              puts2 "Adding episode #{ename}..."
              ep = Episode.new(ename, eurl)
              ep.src = getPage(eurl)
              anime.episode! ep
            rescue WWW::Mechanize::ResponseCodeError
              puts2 "ERROR: Page not found? Skipping..."
              puts ename
              puts2 eurl
            end
            more = agree("Add another episode? Y/N")
          end
          puts2 "Added episodes manually... moving on"
        end
      end
    end
    anime.complete!
    # XXX save the entire anime object, instead of just cache
  end
end

def inTitle(document, title)
  return (document/:title).inner_text[title]
end

def inUrl(url, part)
  return url[part]
end

def single_episode?(name)
  !(name =~ /[0-9] ?([+&]|and) ?[0-9]/)
end

def clean(txt)
  # This picks up most of them, but some are missing. Like *Final* and just plain "Final"
  txt[' (Final)']='' if txt[' (Final)']
  txt[' (Final Episode)']='' if txt[' (Final Episode)']
  txt[' (FINAL)']='' if txt[' (FINAL)']
  txt[' (FINAL EPISODE)']='' if txt[' (FINAL EPISODE)']

  txt['(Final)']='' if txt['(Final)']
  txt['(Final Episode)']='' if txt['(Final Episode)']
  txt['(FINAL)']='' if txt['(FINAL)']
  txt['(FINAL EPISODE)']='' if txt['(FINAL EPISODE)']

  txt
end

main

If you’re writing your own scraper and would like to use the minimal caching functionality in the script above, you can gut everything in main() and put in your own code. Feel free to contact me for assistance.
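
For example, a stripped-down main() that only fetches one page through the cache might look like this (the URL is just a placeholder; first run hits the site, later runs read from ./cache):

def main
  body = getPage('http://animecrazy.net/') # any URL you want cached
  doc  = Hpricot(body)
  puts2 "Page title: #{(doc/:title).inner_text}"
end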


How to POST Form Data Using Ruby

POSTing data on web forms is essential for writing tools and services that interact with resources already available on the web. You can grab information from your Gmail account, add a new thread to a forum from your own app, etc.

The following is a brief example of how this can be done in Ruby using Net::HTTP and this POST form example.

Looking at the source (interlacken.com/webdbdev/ch05/formpost.asp):

<form method="POST" action="formpost.asp">
<p><input type="text" name="box1" size="20" value="">
<input type="submit" value="Submit" name="button1"></p>
</form>

We see two attributes are sent to the formpost.asp script when the user hits the submit button: a textbox named box1, and the submit button itself, named button1 with the value Submit. If this form used the GET method, we would just fetch the URL postfixed with (for example) ?box1=our+text+here, as sketched below. Fortunately, Ruby’s Net::HTTP makes POSTing data just as easy.
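
For the GET case, the equivalent would be nothing more than a URL fetch. This is illustration only, since this particular demo page expects a POST:

require "uri"
require "net/http"

# Parameters go straight into the query string for a GET
uri = URI.parse('http://www.interlacken.com/webdbdev/ch05/formpost.asp?box1=our+text+here')
response = Net::HTTP.get_response(uri)
puts response.body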

The Ruby code for the POST:

#!/usr/bin/ruby

require "uri"
require "net/http"

params = {
  'box1'    => 'Nothing is less important than which fork you use. Etiquette is the science of living. It embraces everything. It is ethics. It is honor. -Emily Post',
  'button1' => 'Submit'
}
x = Net::HTTP.post_form(URI.parse('http://www.interlacken.com/webdbdev/ch05/formpost.asp'), params)
puts x.body

# Uncomment this if you want output in a file
# File.open('out.htm', 'w') { |f| f.write x.body }

Sending the value of button1 is optional in this case, but sometimes this value is checked in the server-side script. One example is when the coder wants to find out whether the form has been submitted (as opposed to it being the user's first visit to the form) without creating a hidden field to send along with the other form fields. Besides, there's no harm in sending a few more bytes.

If you’re curious about URI.parse, it simply makes the URI easier to work with by separating and classifying each of its attributes (scheme, host, path, query, and so on), letting the methods in Net::HTTP focus on their own job instead of having to analyze and parse the URL themselves. More info on this is in the Ruby docs.
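
A quick look at what URI.parse hands back:

uri = URI.parse('http://www.interlacken.com/webdbdev/ch05/formpost.asp?box1=hello')
uri.host   # => "www.interlacken.com"
uri.path   # => "/webdbdev/ch05/formpost.asp"
uri.query  # => "box1=hello"
uri.port   # => 80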

Assuming no errors, running this example (ruby postpost.rb, or chmod a+x postpost.rb; ./postpost.rb) yields:

<form method="POST" action="formpost.asp">
<p><input type="text" name="box1" size="20" value="NOTHING IS LESS
IMPORTANT THAN WHICH FORK YOU USE. ETIQUETTE IS THE
SCIENCE OF LIVING. IT EMBRACES EVERYTHING. IT IS ETHICS.
IT IS HONOR. -EMILY POST">
<input type="submit" value="Submit" name="button1"></p>
</form>

In practice, you might want to use a more specialized library to handle what you’re doing. Be sure to check out Mechanize and Rest-client.
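
For instance, the same POST through Mechanize (using the old WWW::Mechanize API from the scrapers above) might look roughly like this:

require 'rubygems'
require 'mechanize'

agent = WWW::Mechanize.new
page  = agent.get('http://www.interlacken.com/webdbdev/ch05/formpost.asp')

form = page.forms.first           # the demo page has a single form
form['box1'] = 'Nothing is less important than which fork you use.'
result = agent.submit(form, form.buttons.first)
puts result.body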