Category Archives: Tips

Bash Script to Force an Empty Git Push

Sometimes, like when you’re testing hooks or trying to create synced remote and local repos, you’ll find yourself touching empty files just to get a git push going. This script automates this task by creating a unique temporary file, committing it, pushing, and then removing the file.

#!/bin/sh
# create a uniquely named temp file, commit it, push, then clean up
TMP="tmp-$(date +'%m%s')"
touch "$TMP"
git add "$TMP"
git commit "$TMP" -m '(forced push)'
git push
git rm "$TMP"   # note: this only stages the removal; commit and push again if you want the remote cleaned up too
Usage, assuming you named it git-force and made it executable (chmod):

cd git-repo/
./git-force

I place this in ~/bin/, which is in my $PATH. You might want to do the same if you use this a lot.
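A minimal way to set that up, assuming ~/bin is where you keep personal scripts:

mkdir -p ~/bin
cp git-force ~/bin/
chmod +x ~/bin/git-force
# add ~/bin to your PATH if it isn't already (e.g., in ~/.bashrc):
export PATH="$HOME/bin:$PATH"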

How to Maintain Static Sites with Git & Jekyll

“Static sites” in this context just means non-database-driven sites. Your static site can be an elaborate PHP script or just a few markup and image files. For this I am using Jekyll – a neat Ruby gem that gives your static sites dynamic features at build time. It lets you create layouts and embed custom variables in your HTML (this is a “prototype” of the site).

Jekyll tackles all the nuisances involved in creating static pages (I used to add just enough PHP to make a layout). It works by running your prototype through some parsers and outputs plain static HTML/XML (RSS feeds) etc. It’s perfect for lightweight sites that would be impractical on WordPress, like a few static pages of information, landing pages, portfolio/resume pages, and parked domains.
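To get a feel for the workflow, generation is a single command. This sketch uses the source/destination form of the 2009-era jekyll CLI, the same form the deploy hook below relies on:

jekyll proto/ public/   # parse the prototype, write plain HTML/XML into public/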

Git takes care of keeping your development (local) and production (remote) environments synced. Git might be a little confusing if you’re learning it with the mindset that it works like Subversion.

I’ll update this post when the guide is done. For now, the following will assume you’re familiar with Jekyll (or at least have an empty file in the prototype directory) and git. This Bash script simplifies creating the remote git repository:

** Please read through the code and make sure you know what it does, and what you’re doing. As of now, this is biased towards my own Apache/vhost setup. It’s trivial to edit for your specific needs. You’re using this at your own risk.

(direct link – repogen.sh)

#!/bin/sh
# 
# 04/01/2009 | http://biodegradablegeek.com | GPL 
# 
# You should be in site (NOT public) root (be in same dir as public/ log/ etc)
# proto/ is created and will house the jekyll prototype
# public/ will be the generated static site
# the public/ folder will be REMOVED and regenerated on every push
# 

if [ -z "$1" ]; then
  echo "Usage: ./repogen.sh domain.comn"
  exit
fi

# optional. will make it easier to copy/paste cmd to clone repo 
SSHURL="ssh.domain.com"
URL="$1"

echo "** creating tmp repo"
mkdir proto
cd proto
git init 
touch INITIAL
git add INITIAL
git commit -a -m "Initial Commit"

echo "** creating bare repo"
cd ..
git clone --bare proto proto.git
mv proto proto.old
git clone proto.git
rm -rf proto.old

echo "** generating hook"
HOOK=proto.git/hooks/post-update

mv $HOOK /tmp
echo '#!/bin/sh' >> $HOOK
echo '# To enable this hook, make this file executable by "chmod +x post-update".' >> $HOOK
echo '#exec git-update-server-info' >> $HOOK
echo '' >> $HOOK
echo '' >> $HOOK
echo 'URL='"$URL" >> $HOOK
echo 'PROTO="/home/$USER/www/$URL/proto"' >> $HOOK
echo 'PUBLIC="/home/$USER/www/$URL/public"' >> $HOOK
echo '' >> $HOOK
echo 'export GIT_DIR="$PROTO/.git"' >> $HOOK
echo 'pushd $PROTO > /dev/null' >> $HOOK
echo 'git pull' >> $HOOK
echo 'popd > /dev/null' >> $HOOK
echo '' >> $HOOK
echo "echo -----------------------------" >> $HOOK
echo "echo '** Pushing changes to '$URL" >> $HOOK
echo "echo '** Moving current public to /tmp'" >> $HOOK
echo 'mv "$PUBLIC" "/tmp/'$URL'public-`date '+%m%d%Y'`"' >> $HOOK
echo 'echo "** Generating new public"' >> $HOOK
echo 'jekyll "$PROTO" "$PUBLIC"' >> $HOOK

echo "** enabling hook"
chmod a+x $HOOK 

echo "** clone repo on local machina. example:"
echo "git clone ssh://$USER@$SSHURL/~$USER/www/$SSHURL/proto.git"

Usage

Your site structure might be different. repogen.sh is made by pasting the above code into a new file and then making it executable with chmod a+x. This should be done on the remote server.

cd www/domain.com/
ls
public/ private/ log/ cgi-bin/

./repogen.sh domain.com

Now on your local machine, clone the new repo, move your files in, and push:

git clone ssh://[username]@ssh.domain.com/~[username]/www/domain.com/proto.git
cd proto/
cat "hello, world" &gt; index.htm
git add index.htm
git commit -a -m 'first local commit'
git push

After you push your changes, the post-update hook moves the current public/ directory (the root of the site) out of the way and regenerates it. This dir and its contents are automatically generated and will get wiped out on EVERY push. Keep this in mind. All your changes and content should reside in proto/.

The proto/ repo will pull in the new changes, and then Jekyll will be invoked to generate the updated site in public/ from the prototype.

Should you need to edit it, the post-update hook is in the bare git repo (proto.git/hooks/)
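For reference, with domain.com as the argument, the generated hook ends up looking roughly like this:

#!/bin/sh
# To enable this hook, make this file executable by "chmod +x post-update".
#exec git-update-server-info

URL=domain.com
PROTO="/home/$USER/www/$URL/proto"
PUBLIC="/home/$USER/www/$URL/public"

export GIT_DIR="$PROTO/.git"
pushd $PROTO > /dev/null
git pull
popd > /dev/null

echo -----------------------------
echo '** Pushing changes to '$URL
echo '** Moving current public to /tmp'
mv "$PUBLIC" "/tmp/domain.compublic-`date +%m%d%Y`"
echo "** Generating new public"
jekyll "$PROTO" "$PUBLIC"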

Thanks to the authors in the posts below for sharing ideas. I first read this git method on dmiessler’s site.

Resources:
dmiessler.com – using git to maintain static pages
toroid.org – using git to manage a web site
Jekyll @ GitHub
git info
more git info

Burning Xbox 360 Games on Linux (Stealth!)

You could run ImgBurn in Wine, or probably burn the games in VirtualBox running Windows, but that’s no solution… you’re reading this because you want to burn Xbox 360 games on Linux using native tools. It’s surprisingly easy!

The games are usually an ISO file, along with a little DVD (.dvd) file that tells the burner to use a layer break value of 1913760. This file is not necessary in Linux (or Windows) as we will be telling the app to use that break value explicitly.
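If you’re curious, the .dvd file is just a tiny text file; it typically contains little more than the layer break value and the ISO filename, something along these lines:

$ cat kfc-gamex.dvd
LayerBreak=1913760
kfc-gamex.iso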

I will go into detail on how to setup what you need. If you’re impatient, you might wanna skip the setup and jump straight to the quick recap.

Extract the ISO

cd /games/360/GameX
rar  x kfc-gamex.part01.rar

If you don’t have rar (“winrar”) installed, lookie:

The program 'unrar' can be found in the following packages:
 * unrar-free
 * unrar
Try: sudo apt-get install <selected package>

you can also DL it from rarlabs.com.

Now we need to see if the game is stealth/valid. This is done using an app that runs natively on Linux (and OS X) called abgx360.

Install abgx360


Download the tar.gz files from http://abgx360.net/download.html. The TUI is nice. Don’t bother getting the GUI for abgx360.

tar -zxvf abgx360-1.0.0.tar.gz
cd abgx360-1.0.0/
./configure && make
sudo checkinstall -D

(You may use ‘make install’ but this is not recommended on Debian/Ubuntu. checkinstall keeps your shit organized.)

If ./configure fails with an error about wx-config/wxWidgets, make sure wxWidgets is installed..

apt-cache search wxgtk2 

and make sure wx-config is in your PATH. On Ubuntu Intrepid, it wasn’t. Find it and make a symlink to something in your path.. i.e.,

locate wx-config # (finds it in /etc/alternatives/wx-config)
sudo ln -s /etc/alternatives/wx-config /usr/bin/wx-config

Rerun ./configure/make/checkinstall

If you downloaded the local database (abgx360-data) from the site above, install it now: just extract and move the .abgx360/ dir into your home directory (~/).

Checking ISO CRC/SS – Is the game stealth?

abgx360 -af3 kfc-gamex.iso

The -af3 flag will automagically fix/patch the ISO should it encounter any problems.
What abgx360 does is check the ISO’s CRC against an online (or offline, ~/.abgx360/) database. It might begin by updating its database. If this is a problem (no net connection), pass it -localonly.

When that’s done…

Burning the ISO Using growisofs

Making sure the dual layer DVD is in your drive, run the following command:

# growisofs -use-the-force-luke=dao -use-the-force-luke=break:1913760  -dvd-compat -speed=4 -Z /dev/burner=kfc-gamex.iso

I commented it out so you don’t execute it trying to paste it. Let’s look closer at this command…

The break:1913760 is the layer break, which you’ll find in the .dvd file. If for whatever reason you can’t check the .dvd file, just use this value.

Set your speed to something low. Some say 2.5x but I have no problems burning at 4X (my max is 8X). You don’t need to know the lowest speed your burner can go. Just set it to 2-4 and you’ll be fine.

Set /dev/burner to your own device. It’s probably /dev/scd0, /dev/scd1, or may already have a symlink like /dev/dvd6 /dev/dvd etc..

Try grepping dmesg to find your device. i.e.,

dmesg | grep "LITE"

This might give you some information but probably nothing too helpful:

sudo dvdrecord -scanbus

To see if you have the right device, try ejecting it.

eject /dev/dvd6

Set the kfc-gamex.iso to whatever the name/path of your ISO is (case sensitive of course).

Now I usually begin with a dry run. By passing -dry-run to growisofs, it will proceed as normal but quit before writing anything to disk. Actually, it kind of just spits out a command and dies. Awful design! i.e.,

$ growisofs -dry-run -use-the-force-luke=dao -use-the-force-luke=break:1913760  -dvd-compat -speed=4 -Z /dev/burner=kfc-gamex.iso
Executing 'builtin_dd if=kfc-bh5.iso of=/dev/dvd6 obs=32k seek=0'
$ 

So the above is good. Now remove the -dry-run flag to proceed with the actual burn.

growisofs -use-the-force-luke=dao -use-the-force-luke=break:1913760  -dvd-compat -speed=4 -Z /dev/burner=kfc-gamex.iso

Find something to do, or just stare at the screen. After about 20 minutes (at 4X), you’ll see the burn end successfully with output like this:

 7798128640/7835492352 (99.5%) @3.8x, remaining 0:06 RBU 100.0% UBU  99.8%
 7815495680/7835492352 (99.7%) @3.8x, remaining 0:03 RBU  59.7% UBU  99.8%
 7832862720/7835492352 (100.0%) @3.8x, remaining 0:00 RBU   7.9% UBU  99.8%
builtin_dd: 3825936*2KB out @ average 3.9x1352KBps
/dev/burner: flushing cache
/dev/burner: closing track
/dev/burner: closing disc

You’re done!


Quick Recap


Assuming you installed all the dependencies above, here’s a quick recap of what needs to be done to burn a game.
It really takes about a minute to begin the process. Write a shell script if you like (a sketch follows the recap).

cd GameX_REGION_FREE_XBOX360_KFC/
rar x kfc-gamex.part01.rar # Extract game ISO 
abgx360 -af3 kfc-gamex.iso # Checks if rip is valid/stealth/ss patched
growisofs -use-the-force-luke=dao -use-the-force-luke=break:1913760  -dvd-compat -speed=4 -Z /dev/burner=kfc-gamex.iso
eject /dev/burner # When burn is done, eject & play. 
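If you do want that shell script, here’s a minimal sketch of the recap above. The name burn360.sh and the default device are assumptions; adjust both to your setup:

#!/bin/sh
# burn360.sh -- wraps the recap above. Usage: ./burn360.sh kfc-gamex.iso [/dev/burner]
ISO="$1"
DEV="${2:-/dev/dvd}"                      # assumed default; point this at your burner

[ -f "$ISO" ] || { echo "usage: $0 game.iso [device]"; exit 1; }

abgx360 -af3 "$ISO" || exit 1             # verify/fix stealth before burning
growisofs -use-the-force-luke=dao -use-the-force-luke=break:1913760 \
          -dvd-compat -speed=4 -Z "$DEV"="$ISO"
eject "$DEV"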

Create Unique Email Addresses From One Gmail Account

Many sites won’t let you use one email address to register multiple accounts. Sometimes you have legitimate reasons to, and other times you just wanna spam or build backlinks. Either way, here’s how you can get around this…

Gmail accepts email sent to an address with a ‘+’ and extra text appended to the username. The text can be anything alphanumeric, e.g., BigBadPenguin+123@gmail.com

The validation on the site won’t pick this up as a duplicate email address, and messages will still go through to your inbox. If I want 3 accounts on WordPress.com under the same email, for example, I just append +1 and +2:
(You might want your postfix to be more descriptive than this)

  • BigBadPenguin@gmail.com
  • BigBadPenguin+1@gmail.com
  • BigBadPenguin+2@gmail.com

Refactoring Tip: Eliminating Model.find(params[:id]) Duplication

In a controller, you’ll commonly have a method that requires you have an instance variable containing the object you’re working with. An example would be the show, edit, update, and destroy methods (REST).

To eliminate having find(params[:id]) in multiple methods, you can use before_filter, like this:

class Admin::PostsController < Admin::ApplicationController
  before_filter :find_post, :only => [:show, :edit, :update, :destroy]
  rescue_from(ActiveRecord::RecordNotFound) { |e| render :text => "Post not found" }

  def index
    @posts = Post.find(:all)
  end

  def show; end

  def new
    @post = Post.new
  end

  def create
    @post = Post.new
  end

  def edit; end
  def update; end
  def destroy; end

  protected

  def find_post(id = params[:id])
    @post = Post.find(id)
  end
end

(Thanks Jon)

Learning to Read and Grok Other People’s Code

One reason many people don’t contribute to open source apps is that they find it daunting to look through somebody else’s code. Some might even think that it’s just simpler to write something from scratch than to study someone’s work. This isn’t true, and reading foreign code is something you get used to and excel at over time. It’s a necessary skill for every programmer, and has many benefits.

A huge benefit is the massive amount of information you learn and get accustomed to in a short period of time. There’s no way to download O’Reilly PDFs into your brain just yet, but grokking source code written by those much more experienced than you is one of the fastest ways to see and practice everything you’ve been learning in theory (books, sites, classes).

It’s certainly overwhelming to jump head first into a huge app trying to understand every line. I think it’s common for people to open up some code, read it for a few minutes and then never touch it again because they don’t understand it. This was the case with me when I began programming. Here are some ways I used to justify putting off reading third-party code.

Their code style didn’t suit my taste, i.e., they add the opening curly bracket under the function definition, and I would find myself changing their brackets and formatting more than I spent time actually looking at the logic.

I told myself I would learn much more by re-inventing the wheel, or have more control over my app if I built it from scratch. This is only partially true, but the cons outweigh the pros. Reinventing the wheel means deviating from writing program logic and having to learn something that might not even be remotely related to the project I intended to start or finish. Here’s an example that used to be common.

Continue reading Learning to Read and Grok Other People’s Code

Mephisto for the Masses – Installation HOWTO

I’ve recently taken a fancy to Mephisto, a blogging-platform written in Rails. I have nothing against WordPress, but being in Ruby and using Liquid for themes, Mephisto is far easier (and more fun) to tweak and configure, especially when I want to migrate my sites away from the “blog look” and make them more dynamic.

It’s unfortunate that development isn’t as active as, say, Typo’s (also a Rails app, but I haven’t tried it), but I find that Mephisto at its current level makes a simple and fast starting point for most of my projects.

The point of this post is to address numerous problems with the installation. These are present in the tarball release of 0.8 Drax, and in trunk (as of 10/21).

Git The Code

Get the files, either the compressed archive or from edge (recommended).

git clone git://github.com/technoweenie/mephisto.git

Pre-installation

You’ll need to freeze rails 2.0.2, and have the latest tzinfo gem installed:

gem install tzinfo 
cd mephisto/ 
rake rails:freeze:edge RELEASE=2.0.2

The file it downloads should be named rails_2.0.2.zip and NOT rails_edge.zip.

Copy the “new” boot.rb into the config/ folder, overwriting the existing one:

cp vendor/rails/railties/environments/boot.rb config/boot.rb

Now rename the database sample file in config/ to database.yml and edit it to fit your own DB settings. You’ll probably only be using production.
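Something like the following works; the sample’s exact filename may differ in your checkout, so check config/ first:

ls config/                                           # find the shipped sample
cp config/database.example.yml config/database.yml   # assumed sample name
$EDITOR config/database.yml                          # fill in the production section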

Bootstrapping

Now bootstrap:

rake db:bootstrap RAILS_ENV=production

If it works, GREAT. But you’ll probably get an error or two. If you’re getting the following error:

Error message:
  undefined method `initialize_schema_information' for module  
  `ActiveRecord::ConnectionAdapters::SchemaStatements'
Exception class:
  NameError

You forgot to copy over boot.rb from vendor/rails/ – scroll up. If you’re getting an error that redcloth is missing (no such file to load -- RedCloth-3.0.4/lib/redcloth), even though it’s in vendor/, it’s because the path to RedCloth is relative in config/environment.rb. Change it from:

require '../vendor/RedCloth-3.0.4/lib/redcloth' unless Object.const_defined?(:RedCloth)

to

require File.join(File.dirname(__FILE__), '../vendor/RedCloth-3.0.4/lib/redcloth') unless Object.const_defined?(:RedCloth)

Running

After the bootstrap, you may either start the server (ruby script/server, thin, mongrel, etc), or go with mod_rails (Phusion Passenger). I recommend the latter – Passenger is amazing, and the error screen is pretty.

Just point your Apache2 vhost to Mephisto’s PUBLIC/ dir. Here’s an example:


<VirtualHost *:80>
   ServerAdmin mrEman@domain.com
   ServerName domain.com
   ServerAlias www.domain.com

   # DocumentRoot must be rails_app/public/
   DocumentRoot /home/kiwi/www/domain.com/public/public
   RailsEnv production

   DirectoryIndex index.html index.htm index.php
   ErrorLog /home/blue/www/domain.com/log/error.log
   CustomLog /home/blue/www/domain.com/log/access.log combined
</VirtualHost>

Restart Apache2, and you’re done. The site should work right away. If you get the following error:

No such file or directory - /tmp/mysql.sock

It’s because the socket file resides somewhere else on your (host’s) distro. Just find (man find, locate, etc) and add a symlink to it. Here’s an example (Debian):

ln -s /var/run/mysqld/mysqld.sock /tmp/mysql.sock

If you’re getting an error that gems you know you have aren’t found, like:

no such file to load -- tzinfo (MissingSourceFile)

it’s because gems aren’t located anywhere Ruby checks by default. You’ll have to explicitly pass Ruby -rubygems or require ‘rubygems’ – what a nuisance. Open config/environment.rb and add the latter line:

# requires vendor-loaded redcloth
require 'rubygems'

This will be global. Now either restart the server you ran (i.e., thin), or tell mod_rails to restart the app. To do so, just create a file named “restart.txt” in the tmp/ folder of the RAILS app:

cd mephisto_root/
touch tmp/restart.txt

and refresh the page. Passenger will restart the app and restart.txt will vanish.

The default login for the /admin page is admin/test. Wasn’t that a blast?

Top 5 Linux Apps That’ll Boost Your Productivity

These are not in any specific order. Also, some might be available on other operating systems.

Tomboy

This is the best note taking app I’ve ever used. It sits in your taskbar, doesn’t annoy you and doesn’t hog your cpu cycles or memory. When you wanna jot down something, hit a global shortcut, type away, and then close. Notes are saved as you type, and it automatically links notes together if you use CamelCase words. It’s written in C#, and still pretty young, but I’ve never had a problem with it in regard to stability or compatibility.

If your distro’s repository doesn’t have a package for the latest version (0.12.0), I highly recommend downloading a newer binary and/or installing from trunk.

Official site: http://www.gnome.org/projects/tomboy/
Subversion: http://svn.gnome.org/viewvc/tomboy/trunk/
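If the packaged version is recent enough for you, installing it on a Debian/Ubuntu-based distro is just:

sudo apt-get install tomboy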


Tilda and friends


You know those slide-down consoles in FPS games like Quake, UT, Half-Life, that you invoke by hitting tilde (~), and use to enter your leet r_picmip hacks? Tilda is a Quake style drop-down terminal that gives you the same quick access to your Linux console on any workspace. No more opening a new terminal window for every little task.

Official site: http://sourceforge.net/projects/tilda/
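On Debian/Ubuntu-based distros it should already be in the repos:

sudo apt-get install tilda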

Tilda isn’t the only app of its kind. It’s not even the first. Check out the alternatives as well:
stjerm (“works well with Compiz”): https://gna.org/projects/stjerm/ (alt page)
Yet Another Kuake (Yakuake, for KDE): http://yakuake.uv.ro/
Kuake: http://www.nemohackers.org/kuake.php
Visor (OS X): http://docs.blacktree.com/visor/visor


RescueTime

RescueTime is a little program you download (Mac, Windows, Linux) that sits in the background and checks what windows/apps have focus, and uses this data to compile statistics about your computer habits and productivity. It creates neat graphs and shows how productive you are compared to others within a certain time frame.

The commercial versions have some great team features but the free one is enough to track your own productivity. If you’re paranoid, run it through a proxy or chew some Alprazolam or Zyprexa. It’s worth it.

An app sorta like this was an idea I had but never implemented. It was one of those wake-up-in-the-middle-of-the-night-with-an-epiphany ideas: the kind you scramble to jot down with pen and paper before it’s gone forever, then wake up and either find silly or just toss in the idea bin, never to be thought of again. The idea stemmed from wanting to create a chart of how I spend my time and compare myself week by week. My proposed implementation was a lot simpler though. I was thinking about having it only track apps that you specify.

This differs from RT, which has a gigantic db of categorized apps and lets you choose categories to tag as productive or not (i.e., rhythmbox and mplayer would go under audio/video). I like RT’s implementation.

Official Site: http://rescuetime.com/
Unofficial Linux client (works great): https://launchpad.net/rescuetime-linux-uploader


Screen


Screen is something you find on everybody’s list of Top/Fav Linux apps. If you use the console a lot, especially remotely, screen is a must have.

It keeps a persistent console session open, and lets you attach and detach from it anytime you want, which is great if you get disconnected while working over a network, or when you want to continue what you’re doing at home from work or while on the road. It also has neat features like split screen, tabbed consoles, etc.

When you first run it, you might not notice anything different, but you’re actually in a screen session. Press CTRL+a, followed by ‘?‘ to see a list of shortcuts. Tilda + screen = hacks.

Note: The CTRL+a keystroke is part of many of screen’s shortcuts. Unfortunately, it’s also a shortcut in Bash that I frequently use (it jumps to the beginning of the line), so this is annoying to me. There are ways around this (one is sketched below), but I’ve just gotten used to the workaround: to jump to the beginning of the line in screen, press CTRL+a, a.
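One such way is rebinding screen’s escape key in ~/.screenrc. Ctrl+b here is just an example; pick any key you don’t use in the shell:

echo 'escape ^Bb' >> ~/.screenrc   # make Ctrl+b the command key instead of Ctrl+a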

Official site: http://www.gnu.org/software/screen/

You might have it installed. If not:

sudo apt-get install screen

Also check out screenie, a wrapper for screen:

sudo apt-get install screenie


Google Calendar Prism


Digital calendars are either too lean (lack features), or are too bloated to keep open. I don’t need the email features that come with some of them, and hate the fact that they’re written in Java.

I tried a number of apps before trying web apps, and now use Google Calendar. It’s secure, fast and you can see your life anywhere. One nice feature is being able to add to or edit the calendar from your PDA or using text messages. I was initially wary of putting my calendar online, but the benefits outweigh the cons (paranoia).

Going back to desktop apps: the only decent one I’ve tried was Rainlendar, but it’s broken on Linux and it’s closed source. Besides, I only liked it because it was simple but synced with Google Cal. At the time, the only alternative I considered was keeping a tab open with Google Calendar, which I wasn’t going to do because Firefox needs to be xkill’d every few days. Then it hit me: Mozilla Prism!

Prism is (basically) a stripped down web browser that is meant to help integrate web apps onto your desktop. prism-google-calendar is a packaged Mozilla Prism setup with Google Calendar out of the box.

It runs independently of your browser and can be treated as a web app. And since it has its own memory space, it doesn’t go sluggish with Firefox and never needs to be restarted.

I keep it open fullscreen on my second monitor, and can glance at it anytime I feel lost in life.

The only thing missing is a decent alarm feature. Javascript alert()s are shit, and I don’t want annoying emails about my events. I suppose there are hacks around the problem but I learned to glance at the calendar often and don’t need reminders so much anymore.

sudo apt-get install prism-google-calendar



Sharing Files Locally Without a Crossover Cable on OS X

Mac OS X is capable of intelligently detecting whether a cat5 cable is connected to a network device or to another PC. When connected to another PC, it will (digitally) flip the pins to “emulate” a cat5 crossover cable.

Here’s an example on how to share files between a Macbook and another box (XP, Linux etc). All you need is a standard CAT-5 cable.

First, connect the Macbook directly to the machine running XP using the cat5 cable.

Now on the Mac, go to System Preferences -> Network, and manually (no DHCP) set the following:
IP address: 192.168.1.1
Subnet mask: 255.255.255.0
Router: 192.168.1.1

Please note that these settings don’t need to be different if you have a router running on that or a similar IP. I.e., if your Belkin router is at 192.168.2.1, it doesn’t mean you should substitute that for the above settings. Just use the settings provided here as-is before experimenting, or you might run into problems.

Now on the other box (XP, Linux, whatever), set the following manually:
IP address: 192.168.1.2
Subnet mask: 255.255.255.0
Gateway: 192.168.1.1

The only difference is the last segment of the IP address. You can make this anything between 2 and 254.
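A quick way to confirm the link is up is to ping the other box from the Mac’s Terminal:

ping -c 3 192.168.1.2   # replies mean the direct connection is working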

You’re done. Make sure some files are being shared, and then browse the local network on either box.

Fetching Lots of Small Files from RapidShare? Tip to Save Time

I was staring at the 30+ RapidShare tabs I have open, annoyed that I had to keep waiting for the countdown timer before starting each download. The problem was that I kept forgetting about the countdown and the downloads altogether (30 second timer + Geek-ADD… impossible). It literally took me 10+ hours to get one file just because the session kept expiring.

Some Greasemonkey scripts helped a bit, but I don’t like keeping Greasemonkey enabled just for 1-2 tiny scripts that aren’t that useful anyway, nor restarting Firefox (which I’d need to do to re-enable it). I also don’t really need a solution that’s 100% automatic because I’m usually on the PC when I’m downloading these files, so a little manual work isn’t a problem. A premium account would be fine, but I don’t trust RapidShare with my payment information. I don’t know what other information they store (I emailed them, see reply below) . It’s like giving BTJunkie (good people) your name and address before you’re authorized to download torrents. It’s 100% safe, but just makes me feel uneasy. Even though my downloads are public domain. Warez is BAD NEWS, like Weed or premarital sex.

So I just kept refreshing the site and viewing the source hoping that something useful would magically appear, sort of like when I’m hungry and keep opening the fridge, even though I know there’s nothing interesting in it because I checked it a few minutes earlier.

A lot of older scripts and hacks don’t work because RapidShare now (actually it has been a few years) does most of its auth stuff server-side instead of using Javascript. The following tip is useful in some cases. It isn’t a “hack,” and wouldn’t work when downloading big (10MB+) files. It works great for me because I use RapidShare to download ebooks, scripts, and other not-so-big files (usually 1-10 megs each).

RapidShare displays a countdown timer with a duration that depends on the size of the file. Files about 500KB or less have no countdown, while files up to ~30-40 megs have a 30 second countdown. Bigger files have a 50-60 second countdown. I’m not sure of the exact numbers, but you get the idea. After the countdown is done, the page reloads with a unique URL to download the file. This URL expires after some time, but …

Countdown Can Be Started on Multiple Files Simultaneously

When the countdown is active on a file, you can click “Free user” on other RS links you have open, and the link to download each of the other files (the big DOWNLOAD icon) remains active for a while. If you can finish any downloads before this time expires, you can begin the other downloads using this link, without having to go through the countdown again.

So this basically saves having to wait the 30 seconds. That’s it. You can’t download more than one file simultaneously when you aren’t registered, but by having all the download links ready to go, you can begin each download as soon as the prior download has finished.

I’m not sure when the download link expires, but the time seems to have increased to at least a few minutes. Unfortunately, download speed is capped at around 70-80 KB/s on free accounts, but in my experience this still works great (I DL at the max speed).

Note: clicking “Download” while another file is still downloading will give you a warning that your IP is already downloading another file, and you must refresh – meaning you must wait for the countdown again.

Usual Scenario

Pamela has 9 tabs open - Normally, she would have to click “Free user,” wait for the countdown timer, and then DL… and when done, click the next tab and repeat the process.

But now, she can get some of that work (waiting) out of the way – She clicks “Free user” on the first file. 30.. 29.. 28.., … while she waits for that to reach zero, she goes through every other tab, hitting “Free user.”

Now ALL her RS tabs’ countdown timers are going down, and when finished, they will each redirect to the page featuring the download button. After the first download is done – Pamela just goes to the next tab and clicks “Download” and the next download instantly starts. No waiting.

And so on. Again, this wouldn’t work with big files (or anything on a slow connection) because by the time you have finished downloading one item, the other RapidShare download sessions will have expired.

You can squeeze more time out of the session by waiting till the first countdown is almost done before activating the rest. This can give you a 20-25 second headstart. If the session(s) do expire, you can just repeat the process, preferably starting with smaller files first. Also, this might differ depending on the time of day, as RapidShare’s limits change throughout the day. I.e., rush hour, happy hour, etc.

I don’t use RapidShare.de, just .com. If you’ve tried this on .de, be sure to report your results.