Friday, July 11, 2008

Bos taurus gas quarantined!

The gas produced by cows' (scientific name: B. taurus) digestive systems (aka the GIT, short for gastrointestinal tract) is being collected and tested, in order to measure how much it contributes to the global warming problem.
See it here.

What's next? Attaching tubes on human rear ends? :)

P.S. I'm establishing a miniposts tag for such small posts!

Bigger is better.. computer-wise!

Burj Dubai
Bigger is better when it comes to bits: a really small increase in address width can make an enormous difference in capacity, although there may be physical restrictions standing between us and some of those dreamy performances and capacities.

After reading this post, a very, very informative post about getting your 32-bit Ubuntu to work with more than 4GiB of memory, a question was born: how is all this limited? I soon dug a bit deeper into the differences between 32-bit and 64-bit (reading freak that I am) and ended up at this wikipedia article.

Logically, back in the 1980s, when the first personal computers were being developed, processors were 16-bit and 24-bit. Their memory limitation is easy to calculate: a 16-bit processor can address 2^16 bytes, and likewise a 24-bit processor can address up to 2^24 bytes of RAM, giving limits of 64 KiB (kibibytes) and 16 MiB (mebibytes) respectively. Then came the 32-bit processors, which could address up to 4 GiB of memory.

In the meantime, Intel developed an extension called "Physical Address Extension" (PAE), which in short allowed 32-bit processors to use 36-bit physical addresses, i.e. 2^36 bytes = 64 GiB (gibibytes). A very vast improvement!

With the appearance of 64-bit processors it is theoretically possible to address a truly huge amount of RAM, 2^64 bytes = 16 EiB (exbibytes), but nowadays the problem is physical, as we can't produce such big amounts of memory - yet! Maybe in the near future, who knows. All I know is that I won't be the first person to test this exa-beast :)
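These limits are just powers of two, so you can verify them straight from a bash prompt (a quick sketch using bash's built-in integer arithmetic):

```shell
# Address-space limits as powers of two (bash arithmetic)
echo "16-bit:       $((1 << 16)) bytes"   # 65536       = 64 KiB
echo "24-bit:       $((1 << 24)) bytes"   # 16777216    = 16 MiB
echo "32-bit:       $((1 << 32)) bytes"   # 4294967296  = 4 GiB
echo "PAE (36-bit): $((1 << 36)) bytes"   # 68719476736 = 64 GiB
```

(2^64 is left out on purpose: bash arithmetic is 64-bit signed, so shifting that far overflows.)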

To sum up, bytes do matter - vast colossal difference for just a few more bytes!

Note: During the writing of this post, I've learnt about the difference between a gigabyte (1000^3 bytes) and a gibibyte (1024^3 bytes), so I decided to respect it (the gap is 2.4% per 1024-vs-1000 step, and grows to about 7.4% at the GiB level) and use GiB and KiB correctly, following the IEC standard instead of SI :)
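If you're curious, the actual gap between the IEC and SI units can be computed with a one-liner (awk here, since bash only does integer arithmetic):

```shell
# Relative gap between IEC (1024-based) and SI (1000-based) units
awk 'BEGIN { printf "KiB vs kB: %.1f%%\n", (1024/1000 - 1) * 100 }'
awk 'BEGIN { printf "GiB vs GB: %.1f%%\n", (1024^3/1000^3 - 1) * 100 }'
```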

Friday, July 04, 2008

How to run simple local php script files without apache

Some said it's not possible. I've busted my head trying to figure out an easy way to test simple PHP/HTML scripts locally using only PHP, without having to run an Apache server.

Well, using two packages (which can be stripped down to only one), I managed to create a bash script that takes a PHP script file, transforms it to HTML (saved temporarily in the current directory), and opens it in Firefox. Here we go!

Firstly, install the packages, in Ubuntu:

sudo apt-get install php5-cli debianutils

From these packages we'll use the php5 command (from php5-cli) and the tempfile command (from debianutils).
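As a quick sanity check, here's what tempfile does on its own. Note that tempfile is Debian-specific; the mktemp fallback shown here is my own assumption for systems without it, and the generated filename is random, so yours will differ:

```shell
# Create an empty temp file in the current directory with an .html suffix;
# fall back to mktemp on systems without Debian's tempfile
f=$(tempfile -d . -s .html 2>/dev/null || mktemp ./tmpXXXXXX.html)
echo "Created: $f"
ls -l "$f"
rm "$f"   # clean up
```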



Next we create the bash "phpview" convert script. Fire up the terminal (Applications > Accessories > Terminal):

gedit $HOME/phpview.sh

Inside place the following code:

#!/bin/bash
# Convert a PHP script to a temporary HTML file and open it in Firefox
file=$(tempfile -d . -s .html)
echo "Creating file: $file"
php5 "$1" > "$file"
firefox "$file"
echo "Press any key to delete the temporary html file or ctrl-c to stop this script and keep it"
read
echo "Removing $file"
rm "$file"



Save and close the gedit text editor and make phpview.sh executable:

chmod +x $HOME/phpview.sh



That's about it! Now we can create a test PHP/HTML file:

gedit $HOME/Desktop/hi.php

Place in the following:

#!/usr/bin/php5 
<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
<html xmlns="http://www.w3.org/1999/xhtml" dir="ltr" lang="en">
<head><title>Moo</title></head>
<body><?php echo "Hello world!"; ?></body>
</html>



Save and close.

Now let's test it!!

cd $HOME/Desktop
~/phpview.sh hi.php



Woo!

Monday, June 30, 2008

jDownloader for one-click download filehosting services

I've been looking for ages for an open-source downloader for one-click host services such as Rapidshare, Megaupload, Upload.to... the list is more or less endless. A downloader of this type is useful if you don't want to pay each of those services a monthly fee for a single download.

I have postponed this for some time, then I finally stumbled upon jDownloader!

It's Java-based, so you will probably need Java:

sudo apt-get install sun-java6-jre sun-java6-bin sun-java6-fonts



It features many useful things; one of them is the cute (GTK-like) interface, with many themes to appeal to your eye. Moreover, you can use the Captcha Exchange Service with this nifty downloader. The built-in default CAPTCHA service is jAC (Java Anti-Captcha), which automatically "learns" captchas and can make typing CAPTCHAs a breeze (of course, you have to type the first 20-30 captchas yourself).

Another great feature is the addons, available directly or downloadable through Rapidshare, and easily enabled and disabled through the configuration (don't forget to check "Expert mode" for more features!).




The downside is that it's sometimes buggy and still needs work. It also needs a good language translator, since some important messages are still left in German, and... "Kein sprechenzi Deutche"!



Despite its limitations, this downloader is the best so far, and it gets an overall score of 8/10 stars! :)

Sunday, June 29, 2008

How to convert raw to png image files

For most of you this is solved using RawStudio:
sudo apt-get install rawstudio

You run it from Applications > Graphics > Rawstudio, open the directory with your raw files, select them, then go to the menu Batch > Add to batch queue.
Then click on the "Batch" tab (on the right of the program) and choose the settings you want (no need to change the filename, though).

There was a fellow at ubuntuforums with a Fuji camera who couldn't convert his raw (.RAF) images.
I provided the following solution:
sudo apt-get install ufraw
ufraw-batch --help


ufraw-batch can also be used for batch processing of multiple raw images:

#!/bin/bash
# Convert all .raf files in the current directory to PNG, plus thumbnails
outputdir="./converted/"
outputsize="640x480"
thumbnailsize="100x75"

[ -d "$outputdir" ] || mkdir "$outputdir"
shopt -s nullglob nocaseglob   # match .raf/.RAF, skip the loop if none exist
for i in *.raf; do
    fname="${i%.*}"            # strip the .raf extension
    outputfile="$outputdir$fname.png"
    thumbfile="$outputdir$fname.thumb.png"
    ufraw-batch --overwrite --silent --out-type=png16 --size="$outputsize" --output="$outputfile" "$i"
    ufraw-batch --overwrite --silent --out-type=png16 --size="$thumbnailsize" --output="$thumbfile" "$i"
    echo "Done $i"
done