Linux (/ˈlɪnəks/ LIN-uks or, less frequently, /ˈlaɪnəks/ LYN-uks) is a Unix-like and mostly POSIX-compliant computer operating system assembled under the model of free and open-source software development and distribution.

  • Subversion (SVN) is an open source version control system. It allows users to keep track of changes made over time to any type of electronic data. Typical uses are versioning source code, web pages or design documents.

    Installing the latest Subversion (svn) version 1.6.6 takes a bit more than just running apt-get install subversion, because the latest stable version in Debian Lenny is SVN 1.5.1, not 1.6.6.

    Edit the file /etc/apt/sources.list and add the line below:

    # deb lenny-backports main contrib non-free

    Add the public key for lenny-backports by running the following command:

    # wget -O - | apt-key add -

    Now update the package definitions and install Subversion from backports:

    # apt-get update

    # apt-get -t lenny-backports install subversion

    Check that you now have the correct version by running

    # svn --version
    svn, version 1.6.6 (r40053)
       compiled Nov 23 2009, 16:16:41

    Copyright (C) 2000-2009 CollabNet.
    Subversion is open source software, see
    This product includes software developed by CollabNet (http://www.Collab.Net/).

    The following repository access (RA) modules are available:

    * ra_neon : Module for accessing a repository via WebDAV protocol using Neon.
      - handles 'http' scheme
      - handles 'https' scheme
    * ra_svn : Module for accessing a repository using the svn network protocol.
      - with Cyrus SASL authentication
      - handles 'svn' scheme
    * ra_local : Module for accessing a repository on local disk.
      - handles 'file' scheme
    * ra_serf : Module for accessing a repository via WebDAV protocol using serf.
      - handles 'http' scheme
      - handles 'https' scheme

    I recommend always using the latest version (but do back up/dump your repository first); you will see later that it also resolves some issues with Apache Maven.
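The steps above need -t lenny-backports because backported packages are not selected automatically. If you would rather have apt keep the backported Subversion up to date on its own, a pinning entry in /etc/apt/preferences can raise its priority. A minimal sketch (the priority value 200 is an illustrative choice, not from the original post):

```
Package: *
Pin: release a=lenny-backports
Pin-Priority: 200
```

With a priority above 100, versions you have already installed from backports will be upgraded on apt-get upgrade, while new installs still prefer the regular Lenny archive.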

  • This small plugin automatically adds a set of social icons to any article, letting your readers increase your social ranking. It supports

  • Some useful Linux Bash aliases taken from my user profile. If you have a long command that you type frequently, consider putting it in as an alias.

    In computing, alias is a command in various command line interpreters (shells) such as Unix shells, 4DOS/4NT and Windows PowerShell, which enables a replacement of a word with another string. It is mainly used for abbreviating a system command, or for adding default arguments to a regularly used command. [WikiPedia]

    Find all directories and chmod them to rwxr-xr-x:
    alias fixpermD='find . -type d -exec chmod 755 {} \;'

    Find all files and chmod them to rw-r--r--:
    alias fixpermF='find . -type f -exec chmod 644 {} \;'

    Both of the above, plus setting the user and group recursively, in one shot:
    alias fixUserAPerms='fixpermF; fixpermD; chown -R userA .; chgrp -R usergrp .'

    Make a directory and all its files recursively read-only; secure, but a pain to maintain (see next):
    alias ro='find . -type f -exec chmod 444 {} \; ; find . -type d -exec chmod 555 {} \;'

    Make a directory and all its files recursively read-write, just for the time it takes to update your site:
    alias rw='find . -type f -exec chmod 644 {} \; ; find . -type d -exec chmod 755 {} \;'

    Lowercase all files in the current directory:
    alias lowercaseallfiles='for f in *; do mv "$f" "$(echo "$f" | tr "[:upper:]" "[:lower:]")"; done'

    List all open connections to your server:
    alias listOpenConnections='lsof -i'

    List all internet connections:
    alias listinternetconnection='netstat -lptu'

    Find the 10 biggest directories by size:
    alias dirsizes='du -cks * | sort -n | tail -10'

    Show open ports:
    alias openports='netstat -nape --inet'
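To see what fixpermD and fixpermF actually do before aiming them at a real site, you can exercise their find/chmod bodies on a scratch directory. A minimal sketch (the directory and file names are made up for the demo):

```shell
# Build a throwaway tree with deliberately wrong permissions.
demo=$(mktemp -d)
cd "$demo"
mkdir sub
touch sub/file
chmod 700 sub
chmod 600 sub/file

# The bodies of fixpermD and fixpermF:
find . -type d -exec chmod 755 {} \;   # directories -> rwxr-xr-x
find . -type f -exec chmod 644 {} \;   # files       -> rw-r--r--

stat -c '%a %n' sub sub/file
```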
  • I am still testing my NAS system (seven 300 GB disks), and while testing OpenSolaris (under Belenix) and Googling, I found this page:

    This blog is about the Google Summer of Code project "ZFS filesystem for FUSE/Linux"

    For those of you who do not know what FUSE is: FUSE is the Filesystem in Userspace Linux kernel module. This module allows non-privileged users to create their own filesystems without writing any kernel code.

    ZFS has many features which can benefit all kinds of users, from the simple end-user to the biggest enterprise systems:
    • Provable integrity - it checksums all data (and meta-data), which makes it possible to detect hardware errors (hard disk corruption, flaky IDE cables..). 
    • Atomic updates - means that the on-disk state is consistent at all times, there's no need to perform a lengthy filesystem check after forced reboots/power failures.
    • Instantaneous snapshots and clones - it makes it possible to have hourly, daily and weekly backups efficiently, as well as experiment with new system configurations without any risks.
    • Built-in compression, encryption
    • Highly scalable
    • Pooled storage model - creating filesystems is as easy as creating a new directory. You can efficiently have thousands of filesystems, each with its own quotas and reservations, and different properties (compression algorithm, checksum algorithm, etc.).
    • Built-in stripes (RAID-0), mirrors (RAID-1) and RAID-Z (it's like software RAID-5, but more efficient due to ZFS's copy-on-write transactional model). 
    • Variable sector sizes, adaptive endianness etc...
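The pooled-storage and snapshot claims above map directly onto ZFS's two admin commands. An illustrative sketch of a session on a ZFS-capable system (the pool name, device names, and quota are all made up for the example):

```shell
zpool create tank raidz c0d0 c0d1 c0d2   # RAID-Z pool over three disks
zfs create tank/home                     # new filesystem: as easy as mkdir
zfs set quota=10G tank/home              # per-filesystem quota
zfs set compression=on tank/home         # built-in compression
zfs snapshot tank/home@monday            # instantaneous snapshot
```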
    In fact this is a sponsored Google Summer of Code project. Note that Apple is also currently porting ZFS to OS X. That could mean that ZFS will be mainstream within the next 2 years.
    And I expect to test RAID-Z... For those interested in raw RAID-Z performance, you can read this highly technical blog entry: WHEN TO (AND NOT TO) USE RAID-Z

    Sun expects to have a stable ZFS version by June 2006.
  • Disabling root login forces an attacker to guess two passwords instead of only one, making it more difficult to break into your server.

    You must already have another user on the box which is NOT root.

    vi /etc/ssh/sshd_config

    Search for the line

    PermitRootLogin yes

    and change it to

    PermitRootLogin no

    restart sshd by typing
    /etc/init.d/sshd restart
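The same edit can be scripted. A minimal sketch, shown here against a throwaway copy of the file so you can check the substitution before touching the real /etc/ssh/sshd_config:

```shell
# Practice on a temporary copy; point conf at /etc/ssh/sshd_config for real.
conf=$(mktemp)
printf 'Port 22\nPermitRootLogin yes\n' > "$conf"

sed -i 's/^PermitRootLogin.*/PermitRootLogin no/' "$conf"
grep '^PermitRootLogin' "$conf"

# On the real server, follow up with: /etc/init.d/sshd restart
```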
  • Additionally, we made the following concessions to tip the scales in Microsoft's favour.
    1. We didn't modify the model to reflect research by the Robert Frances Group which shows that Linux needed 82% fewer staff-resources.
    2. We have not included the costs of malware; viruses, spyware, worms, keyloggers, adware etc. Every research point we have found suggests that this cost is essentially and predominantly a Windows platform cost, resulting in billions lost by business every year.
    3. We have also not included the substantial costs which arise when systems need to be pre-emptively rebooted or worse, crash, resulting in unscheduled downtime. All our research indicates that Linux rarely if ever suffers such problems and open source platforms on the whole are extremely robust.
    4. Finally, because Microsoft has claimed that introducing Linux into an environment will lead to increased reliance on external consultants, we have tripled the amount budgeted for such requirements in the Linux models. More HERE (PDF 2 MB)

  • git-stitch-repo

    Stitch several git repositories (merging git repositories) into a git fast-import stream, from the Git-FastExport distribution


    $ perl -MCPAN -e shell
    cpan[6]> i /fastexport/
    	Distribution    BOOK/Git-FastExport-0.107.tar.gz
    	Module  < Git::FastExport        (BOOK/Git-FastExport-0.107.tar.gz)
    	Module  < Git::FastExport::Block (BOOK/Git-FastExport-0.107.tar.gz)
    	Module  < Git::FastExport::Stitch (BOOK/Git-FastExport-0.107.tar.gz)
    	4 items found
    cpan[6]> install BOOK/Git-FastExport-0.107.tar.gz
    cpan[6]> CTRL-D


    git-stitch-repo processes the output of git fast-export --all --date-order on the git repositories given on the command line, and creates a stream suitable for git fast-import that will build a new repository containing all the commits, in a new commit tree that respects the history of all the source repositories. Typical usage looks like this:
    $ git clone
    $ git clone
    $ ls
    A B
    $ mkdir result
    $ cd result
    $ git init
    $ git-stitch-repo ../A:folderA ../B:folderB | git fast-import
    # pull both repositories into a new branch, for example:
    $ git checkout -b newBranch
    $ git pull . master-A
    $ git pull . master-B
    # when finished, delete the unused branches
    $ git branch -d master-A
    $ git branch -d master-B
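To get a feel for the stream format git-stitch-repo consumes and emits, you can dump any small repository with git fast-export. A sketch using a throwaway repository (the identity and commit message here are made up):

```shell
# Create a tiny repository and dump it as a fast-import stream.
repo=$(mktemp -d)
cd "$repo"
git init -q
git -c user.name=demo -c user.email=demo@example.com \
    commit -q --allow-empty -m 'first commit'
git fast-export --all --date-order   # emits 'commit refs/heads/...' records
```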

  • SUSE Linux 10.1 is available for download at the following mirrors.

    This release is SuSE 10.1 and is no different from what you will later be able to buy in a store. The boxed retail version comes with 5 CDs for x86 and one DVD9 for x86 and x86_64, and it also includes non-OSS software. It also comes with a nice, thick printed manual and 30 days of support. You won't get those with the free download.
    The free download comes on 5 CDs without non-OSS software, plus one add-on CD with this non-OSS software. Later (hopefully not much later) there will be two DVD5 releases, one for x86 and one for x86_64, that are almost identical to the retail DVD9 and that include both OSS and non-OSS software.


    Product Highlights
    XGL on SUSE 10.1

    For those who prefer the DVD version, you'll have to buy the boxed version or use a small Linux utility called 'makeSuseDVD'. It will take the 5 CD ISOs and build a single DVD ISO.

  • After a lot of work, we proudly announce the availability of openSUSE 10.2, formerly known as SUSE Linux 10.x.
    It's available for download in x86, x86-64, and ppc versions,
    via ftp from our mirrors and via BitTorrent.
    openSUSE 10.2 will be offered as a boxed product in Europe through retail as
    usual. In North America the box will be available through
    Due to production lead time, the first boxes will show up on shelves and in online shops in mid-December. Announcement HERE

    I recommend using the openSUSE torrent files and a BitTorrent client like Azureus.

    Here are also some links to reviews:
    Screenshots, Desktop Linux, and a hacking guide for adding non-free apps on 10.2.

  • SUSE Linux 10.1 (codename "Agama Lizard") RC1 was released yesterday. According to the plan, RC2 will follow next week and the final version on 25 April 2006. I am already preparing my 2 computers for the update :-)




  • The Alternative PHP Cache (APC) is a free and open opcode cache for PHP. It was conceived of to provide a free, open, and robust framework for caching and optimizing PHP intermediate code. from

    Unpack your distribution file. You will have downloaded a file named something like apc_x.y.tar.gz. Unzip this file with a command like
    # wget
    # gunzip apc_x.y.tar.gz

    Next you have to untar it with
    #  tar xvf apc_x.y.tar

    This will create an apc_x.y directory. cd into this new directory:
    # cd apc_x.y

    phpize is a script that should have been installed with PHP, and is normally located in /usr/local/php/bin, assuming you installed PHP in /usr/local/php. (If you do not have the phpize script, you must reinstall PHP and be sure not to disable PEAR.)

    Locate phpize:
    # find / -name phpize

    mine is in
    yours may be in /usr/local/php/bin/phpize

    Run the phpize command:
    #  /usr/local/php/bin/phpize

    Its output should resemble this:
            Configuring for:
              PHP Api Version:   20020918
              Zend Module Api No:   20020429
              Zend Extension Api No:   20021010

    phpize should create a configure script in the current directory. If you get errors instead, you might be missing some required development tools, such as autoconf or libtool. You can try downloading the latest versions of those tools and running phpize again.

    Run the configure script.

    phpize creates a configure script. The only option you need to specify is the location of your php-config script.

    find location of php-config
    # find / -name php-config
    # ./configure --enable-apc --enable-apc-mmap --with-apxs --with-php-config=/etc/alternatives/php-config

    php-config should be located in the same directory as phpize.
    If you prefer to use mmap instead of the default IPC shared memory support,  add --enable-apc-mmap to your configure line.

        If you prefer to use SysV IPC semaphores over the safer fcntl() locks, add --enable-sem to your configure line.  If you don't have a problem
        with your server segfaulting, or any other unnatural accumulation of semaphores on your system, the semaphore-based locking is slightly faster.

    Compile and install the files.
    Simply type:
    # make install
    Installing shared extensions:     /usr/lib/php5/extensions/

    Suggested Configuration (in your php.ini file)

    Although the default APC settings are fine for many installations, serious
    users should consider tuning the following parameters:

        OPTION                  DESCRIPTION
        ------------------      --------------------------------------------------
        apc.enabled             This can be set to 0 to disable APC. This is
                                primarily useful when APC is statically compiled
                                into PHP, since there is no other way to disable
                                it (when compiled as a DSO, the zend_extension
                                line can just be commented-out).
                                (Default: 1)

        apc.shm_segments        The number of shared memory segments to allocate
                                for the compiler cache. If APC is running out of
                                shared memory but you have already set
                                apc.shm_size as high as your system allows, you
                                can try raising this value.  Setting this to a
                                value other than 1 has no effect in mmap mode
                                since mmap'ed shm segments don't have size limits.
                                (Default: 1)

        apc.shm_size            The size of each shared memory segment in MB.
                                By default, some systems (including most BSD
                                variants) have very low limits on the size of a
                                shared memory segment.
                                (Default: 30)

        apc.optimization        This option has been deprecated.
                                (Default: 0)

        apc.num_files_hint      A "hint" about the number of distinct source files
                                that will be included or requested on your web
                                server. Set to zero or omit if you're not sure;
                                this setting is mainly useful for sites that have
                                many thousands of source files.
                                (Default: 1000)

        apc.user_entries_hint   Just like num_files_hint, a "hint" about the number
                                of distinct user cache variables to store.
                                Set to zero or omit if you're not sure;
                                (Default: 4096)

        apc.ttl                 The number of seconds a cache entry is allowed to
                                idle in a slot in case this cache entry slot is
                                needed by another entry.  Leaving this at zero
                                means that your cache could potentially fill up
                                with stale entries while newer entries won't be
                                cached.
                                (Default: 0)

        apc.user_ttl            The number of seconds a user cache entry is allowed
                                to idle in a slot in case this cache entry slot is
                                needed by another entry.  Leaving this at zero
                                means that your cache could potentially fill up
                                with stale entries while newer entries won't be
                                cached.
                                (Default: 0)

        apc.gc_ttl              The number of seconds that a cache entry may
                                remain on the garbage-collection list. This value
                                provides a failsafe in the event that a server
                                process dies while executing a cached source file;
                                if that source file is modified, the memory
                                allocated for the old version will not be
                                reclaimed until this TTL is reached. Set to zero to
                                disable this feature.
                                (Default: 3600)

        apc.cache_by_default    On by default, but can be set to off and used in
                                conjunction with positive apc.filters so that files
                                are only cached if matched by a positive filter.
                                (Default: On)

        apc.filters             A comma-separated list of POSIX extended regular
                                expressions. If any pattern matches the source
                                filename, the file will not be cached. Note that
                                the filename used for matching is the one passed
                                to include/require, not the absolute path.  If the
                                first character of the expression is a + then the
                                expression will be additive in the sense that any
                                files matched by the expression will be cached, and
                                if the first character is a - then anything matched
                                will not be cached.  The - case is the default, so
                                it can be left off.
                                (Default: "")

        apc.mmap_file_mask      If compiled with MMAP support by using --enable-mmap
                                this is the mktemp-style file_mask to pass to the
                                mmap module for determining whether your mmap'ed memory
                                region is going to be file-backed or shared memory
                                backed.  For straight file-backed mmap, set it to
                                something like /tmp/apc.XXXXXX (exactly 6 X's).
                                To use POSIX-style shm_open/mmap put a ".shm"
                                somewhere in your mask.  eg.  "/apc.shm.XXXXXX"
                                You can also set it to "/dev/zero" to use your
                                kernel's /dev/zero interface to anonymous mmap'ed
                                memory.  Leaving it undefined will force an
                                anonymous mmap.
                                (Default: "")

        apc.slam_defense        ** DEPRECATED - Use apc.write_lock instead **
                                On very busy servers whenever you start the server or
                                modify files you can create a race of many processes
                                all trying to cache the same file at the same time.
                                This option sets the percentage of processes that will
                                skip trying to cache an uncached file.  Or think of it
                                as the probability of a single process to skip caching.
                                For example, setting this to 75 would mean that there is
                                a 75% chance that the process will not cache an uncached
                                file.  So the higher the setting the greater the defense
                                against cache slams.  Setting this to 0 disables this
                                feature.
                                (Default: 0)

        apc.file_update_protection
                                When you modify a file on a live web server you really
                                should do so in an atomic manner.  That is, write to a
                                temporary file and rename (mv) the file into its permanent
                                position when it is ready.  Many text editors, cp, tar and
                                other such programs don't do this.  This means that there
                                is a chance that a file is accessed (and cached) while it
                                is still being written to.  This file_update_protection
                                setting puts a delay on caching brand new files.  The
                                default is 2 seconds which means that if the modification
                                timestamp (mtime) on a file shows that it is less than 2
                                seconds old when it is accessed, it will not be cached.
                                The unfortunate person who accessed this half-written file
                                will still see weirdness, but at least it won't persist.
                                If you are certain you always atomically update your files
                                by using something like rsync which does this correctly, you
                                can turn this protection off by setting it to 0.  If you
                                have a system that is flooded with io causing some update
                                procedure to take longer than 2 seconds, you may want to
                                increase this a bit.
                                (Default: 2)

        apc.enable_cli          Mostly for testing and debugging.  Setting this enables APC
                                for the CLI version of PHP.  Normally you wouldn't want to
                                create, populate and tear down the APC cache on every CLI
                                request, but for various test scenarios it is handy to be
                                able to enable APC for the CLI version of APC easily.
                                (Default: 0)

        apc.max_file_size       Prevents large files from being cached.
                                (Default: 1M)

        apc.stat                Whether to stat the main script file and the fullpath
                                includes.  If you turn this off you will need to restart
                                your server in order to update scripts.
                                (Default: 1)
        apc.write_lock          On busy servers when you first start up the server, or when
                                many files are modified, you can end up with all your processes
                                trying to compile and cache the same files.  With write_lock
                                enabled, only one process at a time will try to compile an
                                uncached script while the other processes will run uncached
                                instead of sitting around waiting on a lock.
                                (Default: 1)

        apc.report_autofilter   Logs any scripts that were automatically excluded from being
                                cached due to early/late binding issues.
                                (Default: 0)

        apc.rfc1867             RFC1867 File Upload Progress hook handler is only available
                                if you compiled APC against PHP 5.2.0 or later.  When enabled
                                any file uploads which includes a field called
                                APC_UPLOAD_PROGRESS before the file field in an upload form
                                will cause APC to automatically create an upload_<key>
                                user cache entry where <key> is the value of the
                                APC_UPLOAD_PROGRESS form entry.

                                Note that the file upload tracking is not threadsafe at this
                                point, so new uploads that happen while a previous one is
                                still going will disable the tracking for the previous.
                                (Default: 0)

        apc.localcache          This enables a lock-free local process shadow-cache which
                                reduces lock contention when the cache is being written to.
                                (Default: 0)

        apc.localcache.size     The size of the local process shadow-cache, should be set to
                                a sufficently large value, approximately half of num_files_hint.
                                (Default: 512)

        apc.include_once_override
                                Optimize include_once and require_once calls and avoid the
                                expensive system calls used.
                                (Default: 0)
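Putting a handful of these options together, a php.ini section might look like the following sketch (the values are illustrative starting points for a small server, not defaults from the table above):

```ini
extension=apc.so         ; skip this line if APC is compiled statically
apc.enabled=1
apc.shm_size=64          ; MB of shared memory for the cache
apc.num_files_hint=1024  ; rough count of distinct source files
apc.write_lock=1         ; one process compiles, others run uncached
apc.stat=1               ; keep stat'ing files so edits are picked up
```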


  • BlackDog is a fully self-contained computer with a built-in biometric reader and a host of other powerful features. Unlike any other computing device, BlackDog is completely powered off of the USB port of your host computer – no external power adapter required!
    • Dimensions: H: 0.50" W: 1.75" L: 3.5"
    • Weight: 1.6 ounces
    • 400 MHz PowerPC Processor
    • 64 MB RAM
    • 256 MB ($199) or 512 MB ($239) Flash Memory
    • USB 2.0
    • Biometric Scanner
    • MMC Expansion Slot

  • The Penguin Replies And How!
    In response to the recent Rob Enderle interview, Con Zymaris, CEO of Cybersource Pty. Ltd., clears the air on the much-debated, highly publicised 'Microsoft vs Linux' war in an exclusive with

    Note: Do not forget to read the comments as well, some are really good.
  • MadPenguin has put up an online review of the Pepper Pad, a project similar to the Microsoft Origami solution.

    Once again, Tux has beaten the Borg to the punch with a new innovation. This time, it's the Pepper Pad, an ultra-mobile computer designed primarily for video playback and Internet access under circumstances where most notebook computers would be too heavy, and most PDAs would have too small a viewing screen. If you have seen the Origami media blitz (who hasn't!!) and you want a truly open ultra-mobile computer NOW, and don't want to wait for Origami, you will probably like the Linux-powered Pepper Pad. Read more HERE

  • A good paper on migrating to linux on the desktop using Wine where it is needed...

     With the increasing interest in the value that Linux can bring to the enterprise, companies need to assess likely migration strategies. Technologies that allow the reuse of Windows applications, and Wine in particular, are a key component of such migrations. This whitepaper examines the requirements for enterprise migration on the desktop. It examines a full range of available tactics, including Wine, and then suggests strategies for making the journey in ways that are pragmatic, economical, and customer-focused. More at Desktop Linux

  • KLIK is something so simple that I do not understand why nobody has thought of it before...

    You have on disk a .cmg file which contains the application and all its shared objects (called .dll under Windows or .so under Linux, plus config files) in a compressed virtual file system, and it IS runnable like a normal application!

    What are the advantages (from Here)?

    • I want to have multiple versions of the same application on the same machine. Every version of a software usually has its own strengths and bugs, and I want to use them in parallel.
    • I work with multiple OSes. When I have installed an application, I want to use it on Knoppix, Kanotix, Linspire, Mepis, Ubuntu... whatever. From the same location, without having to install it in every OS again and again. 
    • If a friend wants an application, I just want to send him one file that just plain works and that doesn't need him to fiddle around with things. 
    • I want control over what goes where. I never liked folders like "My Documents" that try to dictate where you place your stuff. After all, the computer should do what I want and not the other way around. For example, my computer has multiple hard disks and if one of them gets full I just place my applications on the other one. 
    • I want to be able to carry around my applications with me, e.g. on a USB stick or CD-ROM. I want a base system that shows no "traces" of my usage. Normally, operating systems tend to become "messy" and slower over time because failed (un)installations leave back "mess" on the system. This is why I want applications to be strictly separated from the OS. 
    • There are package repositories for individual distributions, but usually they never have the latest bleeding-edge software that I want. Therefore I want applications to be independent from particular package management systems. They should simply run on any system. 
    • I don't want to compile stuff. Compiling is for developers, but I am a user. Imagine an application that takes 10 minutes to compile. If only 100,000 users compile it, that makes one million minutes, or almost two years. Wasted time and energy!
    • I add to the list: you can always delete a software package without any problems for other software; remove the .cmg from disk and you're done!

    I agree that some problems come from Linux itself: 120 Linux distributions, and not all applications are compiled for your distribution... Linux systems make heavy use of frameworks (more than 4000 packages or libraries) and are very dependent on the code versions of other frameworks.

    Personally I see only one drawback with KLIK: the size consumed on disk. But since I have 1.2 TB, and since it reduces dependency hell among shared objects, I've tried it; 2 hours later I still have a big smile on my face :-)

    KLIK is still experimental but has worked flawlessly under openSUSE 10.0.

  • A great article!

    The story of the Linksys Wireless-G Router (model WRT54G) and how you can turn a $60 router into a $600 router is a little bit CSI and a little bit Freaks & Geeks. It's also the story of how the open source movement can produce a win-win scenario for both consumers and commercial vendors.
    In June 2003 some folks on the Linux Kernel Mailing List sniffed around the WRT54G and found that its firmware was based on Linux components. Because Linux is released under the GNU General Public License, or GPL, the terms of the license obliged Linksys to make available the source code to the WRT54G firmware.

    So the Linksys WRT54G can be loaded with replacement firmware offering exciting new features. Which raises the question: like what?
    Read more HERE at

  • Tux Droid is a Linux wireless Tux mascot (210mm x 180mm x 140mm, with lowered wings) with a programmable interface, allowing it to announce events by its gestures and by ALSA-driven sound. The events are detected by specific gadgets, which are handled by the Tux Gadget Manager.

    The Tux Droid supports Linux kernel 2.4 or later and needs an 800 MHz CPU and 128 MB of RAM. It communicates via infrared with the USB port (1.1 or 2.0), and for media detection it needs an internet connection. The mascot is driven by Atmel AVR RISC microcontrollers.

    The mascot comes with a microphone and an infrared receiver, and maintains a 2.4 GHz wireless full-duplex digital link with the USB dongle.

    The Tux Droid also has a light sensor and a push button on top of the head. Its gestures cover e.g. eye- and wing-movements, while switch sensors in both wings are triggered by pushing the wings. For its sound output there is a volume control wheel to control a speaker and a 3.5mm stereo audio socket for audio out.




    My Tux Droid arrived 2 days ago.

    In short, here is a list of what I do not like:

    • Noisy gearbox; compared to the Nabaztag, there is a world between them.
    • 2.4 GHz but no WiFi, so it always needs a running server. I hope they will develop a WiFi version.
    • Uses the Acapela voice engine, which is supposed to be the best on the market, but the voices are not really as clear as on the Nabaztag.
    • Fewer gadgets, more in the 20 range.

    And what I like a lot

    • Open source hardware and software,
    • Many programming languages: Python, Java,
    • Easy-to-program gadgets,
    • A lot more response feedback: eyes, mouth, flaps, rotation.
    • Very good wiki and online documentation,
    • It looks like TUX :-)

    I have developed a Tux Droid plugin for TeamCity which is not far from running and will be distributed under GPL v3.

    To download the latest software, go to the kysoh website.
    For developer documentation, visit our wiki.
    For the forum, go here.
    For the trackers, go here.
    And for the community website, go here.

  • Issue number 13, May 2006, of TUX is now available. To download the current issue, subscribe for FREE today. If you have already subscribed, click here or on the Download TUX button on the right to download the current issue.

    You can also download previous issues here

  • Watch Linux Ubuntu 6.10 with its compositing manager (XGL) on a Gnome desktop.

    Download the latest version of Flash to view the video!

    Click Here to View in Full Screen Mode

    And what does that cost? $0
  • The official version of nginx for Ubuntu Precise is 1.1.19, but the latest available stable version is 1.2.2 (Changes). In this post I will show you how to update to the latest available version.

    vi /etc/apt/sources.list

    and add, depending on your Ubuntu version, either:

    For Ubuntu 10.04 Lucid:

    deb lucid nginx
    deb-src lucid nginx

    For Ubuntu 12.04 Precise:

    deb precise nginx
    deb-src precise nginx

    Now you can run

    apt-get update

    When using the public nginx repository for Ubuntu, you’ll get this error

    W: GPG error: lucid Release: The following signatures 
    couldn't be verified because the public key is not available: NO_PUBKEY ABF5BD827BD9BF62

    First of all, this is only a warning and you can ignore it if you know what you are doing. If you prefer to add the public key used for signing the packages and the repository, just run:

    gpg -a --export 7BD9BF62 | sudo apt-key add -

    or

    cat nginx_signing.key | sudo apt-key add -

    apt-get update should now run fine, however after running an

    apt-get install nginx

    you may still get this kind of error:

    dpkg: error processing /var/cache/apt/archives/nginx_1.2.2-1~precise_amd64.deb (--unpack):
     trying to overwrite '/etc/logrotate.d/nginx', which is also in package nginx-common 1.1.19-1
    dpkg-deb: error: subprocess paste was killed by signal (Broken pipe)
    Errors were encountered while processing:

    Just remove nginx-common and retry:

    apt-get remove nginx-common

    More at


  • The PDF Edition of Ubuntu Pocket Guide and Reference is available entirely free of charge. It is practically identical to the Print Edition. You can download it by clicking the links below. Over 250,000 people already have!

    You can share the PDF file with people you know, and even upload it to file sharing networks. You may NOT sell the PDF! Click here for more information, or check out the FAQs.

    Download this book or buy it at

  • Here is how to update nginx in Ubuntu Oneiric 11.10 to the latest development version (1.1.13); the latest stable version is 1.0.11.

    add-apt-repository ppa:chris-lea/nginx-devel
    apt-get update
    apt-get upgrade

  • who | grep -i brunette| date;
    cd ~; unzip; touch; head ; strip; top;
    finger; mount; gasp < yes & yes; fsck;
    more; yes; uptime; umount; sleep

    Yeah simply born to be root...
    linux graffiti