Tag Archives: Automation

Bash Script to Force an Empty Git Push

Sometimes, like when you’re testing hooks or trying to keep remote and local repos in sync, you’ll find yourself touching empty files just to get a git push going. This script automates that: it creates a uniquely named temporary file, commits it, pushes, and then removes it.

#!/bin/sh
TMP=tmp-`date +'%m%s'`
touch $TMP
git add $TMP
git commit $TMP -m '(forced push)'
git push
git rm $TMP
Usage, assuming you named it git-force and made it executable (chmod +x git-force):
cd git-repo/
./git-force

I place this in ~/bin/, which is in my $PATH. You might want to do the same if you use this a lot.

How to Maintain Static Sites with Git & Jekyll

Static sites in this context just means non-database-driven sites. Your static site can be an elaborate PHP script or just a few markup and image files. For this I am using Jekyll – a neat Ruby gem that makes maintaining static sites feel dynamic. It lets you create layouts and embed custom variables in your HTML; this source is the “prototype” of the site.

Jekyll tackles all the nuisances involved in creating static pages (I used to add just enough PHP to make a layout). It works by running your prototype through some parsers and outputting plain static HTML, XML (RSS feeds), etc. It’s perfect for lightweight sites that would be impractical on WordPress, like a few static pages of information, landing pages, portfolio/resume pages, and parked domains.

Git takes care of keeping your development (local) and production (remote) environments synced. Git might be a little confusing if you’re learning it with the mindset that it works like Subversion.

I’ll update this post when the guide is done. For now, the following will assume you’re familiar with Jekyll (or at least have an empty file in the prototype directory) and git. This Bash script simplifies creating the remote git repository:

** Please read through the code and make sure you know what it does, and what you’re doing. As of now, this is biased towards my own Apache/vhost setup. It’s trivial to edit for your specific needs. You’re using this at your own risk.

(direct link – repogen.sh)

#!/bin/sh
# 
# 04/01/2009 | http://biodegradablegeek.com | GPL 
# 
# You should be in site (NOT public) root (be in same dir as public/ log/ etc)
# proto/ is created and will house the jekyll prototype
# public/ will be the generated static site
# the public/ folder will be REMOVED and regenerated on every push
# 

if [ -z "$1" ]; then
  echo "Usage: ./repogen.sh domain.comn"
  exit
fi

# optional. will make it easier to copy/paste cmd to clone repo 
SSHURL="ssh.domain.com"
URL="$1"

echo "** creating tmp repo"
mkdir proto
cd proto
git init 
touch INITIAL
git add INITIAL
git commit -a -m "Initial Commit"

echo "** creating bare repo"
cd ..
git clone --bare proto proto.git
mv proto proto.old
git clone proto.git
rm -rf proto.old

echo "** generating hook"
HOOK=proto.git/hooks/post-update

mv $HOOK /tmp
echo '#!/bin/sh' >> $HOOK
echo '# To enable this hook, make this file executable by "chmod +x post-update".' >> $HOOK
echo '#exec git-update-server-info' >> $HOOK
echo '' >> $HOOK
echo '' >> $HOOK
echo 'URL='"$URL" >> $HOOK
echo 'PROTO="/home/$USER/www/$URL/proto"' >> $HOOK
echo 'PUBLIC="/home/$USER/www/$URL/public"' >> $HOOK
echo '' >> $HOOK
echo 'export GIT_DIR="$PROTO/.git"' >> $HOOK
echo 'pushd $PROTO > /dev/null' >> $HOOK
echo 'git pull' >> $HOOK
echo 'popd > /dev/null' >> $HOOK
echo '' >> $HOOK
echo "echo -----------------------------" >> $HOOK
echo "echo '** Pushing changes to '$URL" >> $HOOK
echo "echo '** Moving current public to /tmp'" >> $HOOK
echo 'mv "$PUBLIC" "/tmp/'$URL'public-`date '+%m%d%Y'`"' >> $HOOK
echo 'echo "** Generating new public"' >> $HOOK
echo 'jekyll "$PROTO" "$PUBLIC"' >> $HOOK

echo "** enabling hook"
chmod a+x $HOOK 

echo "** clone repo on local machina. example:"
echo "git clone ssh://$USER@$SSHURL/~$USER/www/$SSHURL/proto.git"

Usage

Your site structure might be different. repogen.sh is made by pasting the above code into a new file, then running chmod a+x on it to make it executable. This should be done on the remote server.

cd www/domain.com/
ls
public/ private/ log/ cgi-bin/

./repogen.sh domain.com

Now on your local machine, clone the new repo, move your files in, and push:

git clone ssh://[username]@ssh.domain.com/~[username]/www/domain.com/proto.git
cd proto/
cat "hello, world" &gt; index.htm
git add index.htm
git commit -a -m 'first local commit'
git push

After you push your changes, the post-update hook will delete the public/ directory (the root of the site). This dir and its contents are automatically generated and will get wiped out on EVERY push. Keep this in mind. All your changes and content should reside in proto/.

The proto/ repo will pull in the new changes, and then Jekyll will be invoked to generate the updated site in public/ from the prototype.

Should you need to edit it, the post-update hook is in the bare git repo (proto.git/hooks/).

Thanks to the authors in the posts below for sharing ideas. I first read this git method on dmiessler’s site.

Resources:
dmiessler.com – using git to maintain static pages
toroid.org – using git to manage a web site
Jekyll @ GitHub
git info
more git info

Scraping Google Trends with Mechanize and Hpricot

This is a small Ruby script that fetches the 100 trends of the day for a specific date. If multiple dates are searched, one can find out how many times a keyword occurred between two dates, or just find out what keywords are constantly appearing on the top 100 list. The script is incomplete and one must implement the “implement me!” methods to get full functionality. This, in its current state, should serve as a good starting point for scraping Google Trends.

On a technical note, it’s using mechanize, hpricot, tempfile (for the cache). A lot of this is just copy & paste programming from the earlier anime scraper.

To grab the gems (rdoc takes 10x as long as the gem to fetch and install):

sudo gem install mechanize --no-rdoc
sudo gem install hpricot --no-rdoc

#!/usr/bin/env ruby
# biodegradablegeek.com
# public domain
#

require 'rubygems'
require 'hpricot'
require 'tempfile'
require 'yaml' # cache index and master list are serialized as YAML
require 'mechanize'
#require 'highline/import'
#HighLine.track_eof = false

$mech = WWW::Mechanize.new
$mech.user_agent_alias = 'Mac Safari'
$master = []

def puts2(txt=''); puts "*** #{txt}"; end

class Cache
  def initialize
    # Setup physical cache location
    @path = 'cache'
    Dir.mkdir @path unless File.exists? @path

    # key/val = url/filename (of fetched data)
    @datafile = "#{@path}/cache.data"
    @cache = load @datafile
  end

  def put key, val
    tf = Tempfile.new('googletrends', @path)
    path = tf.path
    tf.close! # important!

    puts2 "Saving to cache (#{path})"
    open(path, 'w') { |f|
      f.write(val)
      @cache[key] = path
    }

    save @datafile
  end

  def get key
    return nil unless exists?(key) && File.exists?(@cache[key])
    open(@cache[key], 'r') { |f| f.read }
  end

  def files
    @cache.values
  end

  def first
    @cache.first
  end

  def exists? key
    @cache.has_key? key
  end

private
  # Load saved cache
  def load file
    return File.exists?(file) ? YAML.load(open(file).read) : {}
  end

  # Save cache
  def save path
    open(path, 'w') { |f|
      f.write @cache.to_yaml
    }
  end
end

$cache = Cache.new

def fetch(url)
  body = $mech.get(url).body()
  $cache.put(url, body)
  body
end

def getPage(url)
  body = $cache.get(url)

  if body.nil?
    puts "Not cached. Fetching from site..."
    body = fetch url
  end
  body
end

def loadState
  mf = 'cache/master.data'
  $master = File.exists?(mf) ? YAML.load(open(mf).read) : {}
  $master = {} if $master==false
end

def saveState
  open('cache/master.data', 'w+') { |f|
    f.write $master.to_yaml
  }
end

def main
  #loadState

  # Grab top 100 Google Trends (today)
  #date = Time.now.strftime '%Y-%m-%d'
  date = '2009-01-21'

  puts2 "Getting Google's top 100 search trends for #{date}"
  url = "http://www.google.com/trends/hottrends?sa=X&date=#{date}"
  puts2 url

  begin
    body = getPage(url)
  rescue WWW::Mechanize::ResponseCodeError
    puts2 "Couldn't fetch URL. Invalid date..?"
    exit 5
  end

  puts2 "Fetched page (#{body.size} bytes)"

  if body['There is no data on date']
    puts2 'No data available for this date.'
    puts2 'Date might be too old or too early for report, or just invalid'
    exit 3
  end

  doc = Hpricot(body)

  (doc/"td[@class='hotColumn']/table[@class='Z2_list']//tr").each do |tr|
    td = (tr/:td)
    num = td[0].inner_text.sub('.','').strip
    kw = td[1].inner_text
    url = (td[1]/:a).first[:href]
    Keyword.find_or_new(kw) << Occurance.new(num, date, url)
  end
  puts "Got info on #{$master.size} keywords for #{date}"
  puts "keyword '#{$master.first.name}' occured #{$master.first.occurances} times"
end

class Occurance
  attr_accessor :pos, :date, :url
  def initialize(pos, date, url)
    @pos = pos
    @date = date
    @url = url
  end
end

class Keyword
  attr_accessor :name, :occurances
  def initialize(name)
    @name = name
    @occurances = []
    @position_average = nil
    @count = nil
    $master << self
  end

  def self.find_or_new(name)
    x = $master.find { |m| name==m.name }
    x || Keyword.new(name)
  end

  def << occurance
    @occurances << occurance
  end

  def occured_on? datetime
    raise 'implement me'
  end

  def occured_between? datetime
    raise 'implement me'
  end

  def occurances datetime=nil
    raise 'implement me' if datetime
    @occurances.size
  end

  def occurances_between datetime
    raise 'implement me'
  end

  def pos_latest
    @occurances.last.date
  end

  def pos_average
    @position_average
  end

  def pos_average_between datetime
    raise 'implement me'
  end
end

#   Instance= [num, date, url]
#   Keyword=[Instance, Instance, Instance]
#   Methods for keywords:
#   KW.occured_on? date
#   KW.occured_between? d1, d2
#   KW.occurances
#   KW.occurances_between? d1, d2
#   KW.pos_latest
#   KW.pos_average
#   KW.pos_average_between

#   KW has been on the top 100 list KW.occurances.size times
#   The #1 keywords for the month of January: Master.sort_by KW.occurances_between? Jan1,Jan31.pos_average_between Jan1,Jan31
#
#   Top keywords: sort by KW.occurances.size = N keyword was listed the most.
#   Top keywords for date D: Master.sort_by KW.occured_on (x).num

main
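
If you want a head start on the “implement me!” methods, here is a rough, untested sketch of two of them. It reopens the Keyword class above, assumes occurrence dates stay in the same 'YYYY-MM-DD' strings used in main(), and gives occurances_between the two-date signature sketched in the comment block above:

require 'date'

class Keyword
  # was the keyword on the top 100 list on this date?
  def occured_on? datetime
    day = Date.parse(datetime)
    @occurances.any? { |o| Date.parse(o.date) == day }
  end

  # how many times did it chart between the two dates (inclusive)?
  def occurances_between d1, d2
    from, to = Date.parse(d1), Date.parse(d2)
    @occurances.select { |o|
      d = Date.parse(o.date)
      d >= from && d <= to
    }.size
  end
end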

AnimeCrazy Scraper Example Using Hpricot & Mechanize

This is a little (as of now incomplete) scraper I wrote to grab all the anime video code off of AnimeCrazy (dot) net. This site doesn’t host any videos on its own server, but just embeds ones that have been uploaded to other sites (Megavideo, YouTube, Vimeo, etc). I don’t know who the original uploaders of the videos are, but I’ve seen this same collection of anime links being used on some other sites. This site has about 10,000 episodes/parts (1 movie may have 6+ parts). The scraper below was only tested with “completed anime shows” and got around 6300 episodes. The remaining content (anime movies and running anime shows) should work as-is, but I personally held off on getting those because I want to examine them closely to try cleaning up the inconsistencies as much as possible.

This scraper needs some initial setup and won’t work out of the box, but I’m including it here in the hopes that it will serve as a decent example of a small real world scraper, if you’re looking to learn the basics of scraping with Hpricot and Mechanize. Let me know if you find any use for it. I will update the posted code later this week when I have time to complete it and add some more features.
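
One piece of that setup: main() below expects a local file named anime_list containing the saved HTML of the site’s series sidebar. You can save it from your browser, or with a few lines of Mechanize; in this sketch the URL is a placeholder, and you would still want to trim the saved page down to just the sidebar markup:

#!/usr/bin/env ruby
# sketch: grab a page and dump it to the 'anime_list' file the scraper reads.
# the URL is a placeholder; trim the result down to the series sidebar by hand.
require 'rubygems'
require 'mechanize'

agent = WWW::Mechanize.new
agent.user_agent_alias = 'Mac Safari'
page = agent.get('http://animecrazy.net/') # placeholder URL
open('anime_list', 'w') { |f| f.write page.body }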

There’s one major problem with the organization of episodes on AnimeCrazy, and it’s the fact that some episodes are glued together into one post. Right now the scraper stops and asks you how to proceed when it comes across such a post. You basically need to tell the scraper if a post (page) contains 1 episode (video) or multiple. If there’s 1, it proceeds on its own, but if there’s two, it requires that you give it the names and links of each individual episode (part1 and part2 usually). Sometimes 2 episodes are together in 1 video. Sorta like those music albums on KaZaA or LimeWire that are basically ripped as one huge mp3 instead of individual songs.

This only accounts for maybe 30-40 out of 6000 videos, and it’s not that big of a deal because the amount of work needed to proceed with the scraping is small. But it IS work, and a bitch slap to the entire concept of automation; coding around the issue is a major hassle, and there would still be a high chance that some inconsistencies slip through. It would be far less work to just find another anime site that is more consistent, though the reason AnimeCrazy is good is that it’s active, and the site IS updated manually these days, as far as I can tell.
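
The single_episode? helper near the bottom of the script is what makes that call from the post title. A quick illustration of the same pattern check, run against some made-up titles:

# same pattern check as single_episode? below, on made-up example titles
def single_episode?(name)
  !(name =~ /[0-9] ?([+&]|and) ?[0-9]/)
end

['Basilisk Episode 2', 'Basilisk Episode 7 + 8', 'Basilisk 7and8'].each do |title|
  puts "#{title} => #{single_episode?(title) ? 'one episode' : 'probably multiple'}"
end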

BTW, Why The Lucky Stiff rocks, and Hpricot is amazing. But the serious scrapologist should consider scrAPI or sCRUBYt (uses Hpricot) for big projects.


#!/usr/bin/env ruby
# License: Public domain. Go sell it to newbs on DigitalPoint.

require 'rubygems'
require 'hpricot'
require 'mechanize'
require 'tempfile'
require 'yaml' # cache index is serialized as YAML
require 'highline/import'
HighLine.track_eof = false

$mech = WWW::Mechanize.new
$mech.user_agent_alias = 'Mac Safari'

###############################
$skip_until = false
DEBUG=false
###############################

def debug?
  DEBUG
end

def puts2(txt='')
  puts "*** #{txt}"
end

#  Anime has: title, type (series, movie), series
#  Episode has name/#, description, parts (video code)

class Episode
  attr_accessor :name, :src, :desc, :cover
  def initialize(title, page)
    @src = page # parts (megavideo, youtube etc)
    @name = title
    @desc = nil # episode description
    @cover = nil # file path
  end
end

class Anime
  attr_accessor :name, :page, :completed, :anime_type, :episodes
  def initialize(title, page)
    @name = title
    @page = page
    @episodes = []
    @anime_type = 'series'
    @completed = false
  end

  def complete!
    @completed = true
  end

  def episode! episode
    @episodes << episode
  end
end

class Cache
  def initialize
    # Setup physical cache location
    @path = 'cache'
    Dir.mkdir @path unless File.exists? @path

    # key/val = url/filename (of fetched data)
    @datafile = "#{@path}/cache.data"
    @cache = load @datafile
    #puts @cache.inspect
  end

  def put key, val
    tf = Tempfile.new('animecrazy', @path)
    path = tf.path
    tf.close! # important!

    puts2 "Saving to cache (#{path})"
    open(path, 'w') { |f|
      f.write(val)
      @cache[key] = path
    }

    save @datafile
  end

  def get key
    return nil unless exists?(key) && File.exists?(@cache[key])
    open(@cache[key], 'r') { |f| f.read }
  end

  def exists? key
    @cache.has_key? key
  end

private
  # Load saved cache
  def load file
    return File.exists?(file) ? YAML.load(open(file).read) : {}
  end

  # Save cache
  def save path
    open(path, 'w') { |f|
      f.write @cache.to_yaml
    }
  end
end

$cache = Cache.new

def fetch(url)
  body = $mech.get(url).body()
  $cache.put(url, body)
  body
end

def getPage(url)
  # First let's see if this is cached already.
  body = $cache.get(url) 

  if body.nil?
    puts "Not cached. Fetching from site..."
    body = fetch url
  end
  body
end

def main
  # Open anime list (anime_list = saved HTML of sidebar from animecrazy.net)
  anime_list = Hpricot(open('anime_list', 'r') { |f| f.read })
  puts2 "Anime list open"

  # Read in the URL to every series
  masterlist = []

  (anime_list/:li/:a).each do |series|
    anime = Anime.new(series.inner_text, series[:href])
    masterlist << anime
    puts2 "Built structure for #{anime.name}..."
  end

  puts2

  puts2 "Fetched #{masterlist.size} animes. Now fetching episodes..."
  masterlist.each do |anime|
    puts2 "Fetching body (#{anime.name})"
    body = getPage(anime.page)
    puts2 "Snatched that bitch (#{body.size} bytes of Goku Goodness)"
    puts2

    doc = Hpricot(body)
    (doc/"h1/a[@rel='bookmark']").each do |episode|
      name = clean(episode.inner_text)

      if $skip_until
        #$skip_until = !inUrl(episode[:href], 'basilisk-episode-2')
        #$skip_until = nil == name['Tsubasa Chronicles']
        puts2 "Resuming from #{episode[:href]}" if !$skip_until
        next
      end

      # Here it gets tricky. This is a major source of inconsistencies in the site.
      # They group episodes into 1 post sometimes, and the only way to find
      # out from the title of the post is by checking for the following patterns
      # (7 and 8 are example episode #s)
      # X = 7+8, 7 + 8, 7 and 8, 7and8, 7 & 8, 7&8

      # If an episode has no X then it is 1 episode.
      # If it has multiple parts, they are mirrors.
      if single_episode? name
        begin

          puts2 "Adding episode #{name}..."
          ep = Episode.new(name, episode[:href])
          ep.src = getPage(episode[:href])
          anime.episode! ep
        rescue WWW::Mechanize::ResponseCodeError
          puts2 "ERROR: Page not found? Skipping..."
          puts name
          puts2 episode[:href]
        end
      else
        # If an episode DOES have X, it *may* have 2 episodes (but may have mirrors, going up to 4 parts/vids per page).
        # Multiple parts will be the individual episodes in chronological order.
        puts2 "Help me! I'm confused @ '#{name}'"
        puts2 "This post might contain multiple episodes..."

        puts2 "Please visit this URL and verify the following:"
        puts episode[:href]

        if agree("Is this 1 episode? yes/no ")
          begin
            puts2 "Adding episode #{name}..."
            ep = Episode.new(name, episode[:href])
            ep.src = getPage(episode[:href])
            anime.episode! ep
          rescue WWW::Mechanize::ResponseCodeError
            puts2 "ERROR: Page not found? Skipping..."
            puts name
            puts2 episode[:href]
          end
        else
          more = true
          while more
            ename = ask("Enter the name of an episode: ")
            eurl =  ask("Enter the URL of an episode: ")

            begin
              puts2 "Adding episode #{ename}..."
              ep = Episode.new(name, episode[:href])
              ep.src = getPage(episode[:href])
              anime.episode! ep
            rescue WWW::Mechanize::ResponseCodeError
              puts2 "ERROR: Page not found? Skipping..."
              puts name
              puts2 episode[:href]
            end
            more = agree("Add another episode? Y/N")
          end
          puts2 "Added episodes manually... moving on"
        end
      end
    end
    anime.complete!
    # XXX save the entire anime object, instead of just cache
  end
end

def inTitle(document, title)
  return (document/:title).inner_text[title]
end

def inUrl(url, part)
  return url[part]
end

def single_episode?(name)
  !(name =~ /[0-9] ?([+&]|and) ?[0-9]/)
end

def clean(txt)
  # This picks up most of them, but some are missing. Like *Final* and just plain "Final"
  txt[' (Final)']='' if txt[' (Final)']
  txt[' (Final Episode)']='' if txt[' (Final Episode)']
  txt[' (FINAL)']='' if txt[' (FINAL)']
  txt[' (FINAL EPISODE)']='' if txt[' (FINAL EPISODE)']

  txt['(Final)']='' if txt['(Final)']
  txt['(Final Episode)']='' if txt['(Final Episode)']
  txt['(FINAL)']='' if txt['(FINAL)']
  txt['(FINAL EPISODE)']='' if txt['(FINAL EPISODE)']

  txt
end

main

If you’re writing your own scraper and would like to reuse the minimal caching functionality from the script above, you can gut everything in main() and put in your own code. Feel free to contact me for assistance.
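
For example, keeping the Cache class, fetch() and getPage() exactly as they are, a replacement main() can be as small as this (the URL is just a placeholder):

# drop-in replacement for main(); everything else in the script stays as-is
def main
  body = getPage('http://example.com/somepage') # placeholder URL; cached after the first fetch
  doc = Hpricot(body)

  # do whatever you need with the parsed document
  title = (doc/:title).inner_text
  puts "Page title: #{title}"
end

main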

Here is some sample output:
Continue reading AnimeCrazy Scraper Example Using Hpricot & Mechanize

Quick BASH Script to Dump & Compress a MySQL Database

A quick script I whipped up to dump my MySQL database.
Usage: sh backthatsqlup.sh

(be warned that it dumps ALL databases. This can get huge uncompressed)


#!/bin/sh
# Isam (Biodegradablegeek.com) public domain 12/28/2008
# Basic BASH script to dump and compress a MySQL dump

out=sequel_`date +'%m%d%Y_%M%S'`.sql
dest=/bx/

function e {
  echo -e "n** $1"
}

e "Dumping SQL file ($out). May take awhile..."
#echo "oh snap" &gt; $out
sudo mysqldump -u root -p --all-databases &gt; $out
if [ $? -ne 0 ]; then
  e "MySQL dump failed. Check that server is up and your username/pass"
  exit 7
fi

e "Uncompressed SQL file size"
du -hs $out

e "Compressing SQL file"
gz=$out.tar.gz
tar -zvvcf $gz $out
rt=$?

if [ $rt -ne 0 ]; then
  e "tar failed (error=$rt). Will NOT remove uncompressed SQL file"
else
  e "Removing uncompressed SQL file"
  rm -f $out
  out=$gz

  e "Compressed SQL file size"
  du -hs $out
fi

e "Moving shit to '$dest'"
sudo mv $out $dest

download BackThatSqlUp.sh

Using Javascript to Populate Forms During Development

During development, working with forms quickly gets annoying because you have to constantly fill in each field, sometimes with unique info. One way around this is to write a little Javascript code that just populates the fields. I use something like this on the bottom of the form. I had jQuery no-conflict mode on in this case. In your app you might be able to get away with replacing _j() with $():

<% if ENV['RAILS_ENV']=='development' -%>


<% end -%>

Script to Quickly Setup WebApp Environment and Domain

Just sharing a script I wrote to quickly deploy WordPress sites (and eventually a few other webapps), which somebody might find useful. This uses Linode's API* to add the domain name to the DNS server along with some subdomains. If you’re using another host (Slicehost, your own server, etc), you can alter the dns class to use that API, or just ignore the DNS stuff completely; it’s optional.

This will be updated periodically as I refactor and add support for more apps (notably Joomla and Clipshare – though this would violate their terms unless you have the unlimited license). This was written primarily because I couldn’t stand setting up another vhost and WordPress installation. There are plenty of existing deployers but I plan on adding very specific features and tweaking this for in-house work. I also wanted to try Rio (Ruby-IO). GPL license. Go nuts.

* As of 10/11, the apicore.rb file on the site has some syntactic errors in the domainResourceSave method. I sent an email out to the author about it. Problems aren’t major. You can get my apicore.rb here.

This won’t run unless you create the appropriate folder structure in /etc/mksite/. I’ll get going on this in a bit. See the code below:

Continue reading Script to Quickly Setup WebApp Environment and Domain

How to POST Form Data Using Ruby

POSTing data on web forms is essential for writing tools and services that interact with resources already available on the web. You can grab information from your Gmail account, add a new thread to a forum from your own app, etc.

The following is a brief example of how this can be done in Ruby using Net::HTTP and this POST form example.

Looking at the source (interlacken.com/webdbdev/ch05/formpost.asp):

<form method="POST" action="formpost.asp">
<p><input type="text" name="box1″ size="20″ value="">
<input type="submit" value="Submit" name="button1″></p>
</form>

We see two attributes are sent to the formpost.asp script when the user hits the submit button: a textbox named box1, and the submit button’s value ("Submit"), sent under the name button1. If this form used a GET method, we would just fetch the URL postfixed with (for example) ?box1=our+text+here. Fortunately, Ruby’s Net::HTTP makes posting data just as easy.
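
For comparison, the GET version would be a one-liner; this is shown only to contrast with the POST below, since this particular form expects a POST:

require 'uri'
require 'net/http'

# GET equivalent: the parameters ride along in the URL itself
puts Net::HTTP.get(URI.parse('http://www.interlacken.com/webdbdev/ch05/formpost.asp?box1=our+text+here'))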

The Ruby code:

#!/usr/bin/ruby

require "uri"
require "net/http"

params = {'box1' => 'Nothing is less important than which fork you use. Etiquette is the science of living. It embraces everything. It is ethics. It is honor. -Emily Post',
'button1' => 'Submit'
}
x = Net::HTTP.post_form(URI.parse('http://www.interlacken.com/webdbdev/ch05/formpost.asp'), params)
puts x.body

# Uncomment this if you want output in a file
# File.open('out.htm', 'w') { |f| f.write x.body }

Sending the value of button1 is optional in this case, but sometimes this value is checked in the server side script. One example is when the coder wants to find out if the form has been submitted – as opposed to it being the user’s first visit to the form – without creating a hidden attribute to send along w/ the other form fields. Besides, there’s no harm in sending a few more bytes.

If you’re curious about URI.parse, it simply makes the URI easier to work with by separating and classifying each of its attributes, letting the methods in Net::HTTP focus on their own job instead of having to analyze and parse the URL themselves. More info on this in the Ruby doc.
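
To see what that means, here is what URI.parse pulls out of the URL used above:

require 'uri'

uri = URI.parse('http://www.interlacken.com/webdbdev/ch05/formpost.asp')
puts uri.scheme # "http"
puts uri.host   # "www.interlacken.com"
puts uri.path   # "/webdbdev/ch05/formpost.asp"
puts uri.port   # 80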

Assuming no errors, running this example (ruby postpost.rb, or chmod a+x postpost.rb; ./postpost.rb) yields:

<form method="POST" action="formpost.asp">
<p><input type="text" name="box1″ size="20″ value="NOTHING IS LESS
IMPORTANT THAN WHICH FORK YOU USE. ETIQUETTE IS THE
SCIENCE OF LIVING. IT EMBRACES EVERYTHING. IT IS ETHICS.
IT IS HONOR. -EMILY POST">
<input type="submit" value="Submit" name="button1″></p>
</form>

In practice, you might want to use a more specialized library to handle what you’re doing. Be sure to check out Mechanize and Rest-client.
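
With Mechanize, for instance, the same POST looks roughly like this (a sketch using the old WWW::Mechanize namespace from the scrapers above; newer versions drop the WWW:: prefix):

#!/usr/bin/ruby
# rough Mechanize version of the same form post
require 'rubygems'
require 'mechanize'

agent = WWW::Mechanize.new
page = agent.get('http://www.interlacken.com/webdbdev/ch05/formpost.asp')
form = page.forms.first
form['box1'] = 'Etiquette is the science of living. -Emily Post'
result = agent.submit(form, form.buttons.first)
puts result.body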