storage

  • NAS are very popular these days; for people who cannot afford a professional NAS or a sub-1000€ system, the idea of recycling an old PC quickly comes to mind. What is a little more difficult is finding the right hardware and software combination.
    I highly recommend Promise SATA RAID controllers for their native Linux support, while for software I came across Openfiler, an open-source project with professional functionality...

    Openfiler is a powerful, intuitive browser-based network storage software distribution. Openfiler delivers file-based Network Attached Storage and block-based Storage Area Networking in a single framework. Openfiler sits atop CentOS Linux (which is derived from sources freely provided to the public by a prominent North American Enterprise Linux vendor). It is distributed as a stand-alone Linux distribution.

    The entire software stack interfaces with third-party software that is all open source. File-based networking protocols supported by Openfiler include: NFS, SMB/CIFS, HTTP/WebDAV and FTP. Network directories supported by Openfiler include NIS, LDAP (with support for SMB/CIFS encrypted passwords), Active Directory (in native and mixed modes) and Hesiod. Authentication protocols include Kerberos 5. Openfiler includes support for volume-based partitioning, iSCSI (initiator), scheduled snapshots, resource quotas, and a single unified interface for share management which makes allocating shares for the various network file-system protocols a breeze.

    Note that I am currently thinking of buying a ReadyNAS NV because of its size.
  •  I will be building my own Network Attached Storage (NAS) monster in the next few days:

    • RAID 6 instead of RAID 5, so 2 disks can fail in the array. A 7th disk as hot spare is planned
    • Hardware RAID, because the cheap motherboard (NVIDIA chipset) supports, at best, software RAID 5
    • GIGABIT network...
    • Open source powered of course: FreeBSD for security, or Openfiler (RedHat-based) for its ease of use...
    • Encrypted volumes (Linux kernel 2.6 dm-crypt), private keys on a USB stick; only 20GB will NOT be encrypted
    • Filesystem: XFS or ReiserFS, because I will be storing big files
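    The dm-crypt plan above can be sketched with cryptsetup. This is only an illustration: the device name /dev/md0, the mount point and the key path on the USB stick are assumptions, not values from this build.

```shell
# Sketch, assuming the RAID array is /dev/md0 and the USB stick is mounted at /media/usbkey
dd if=/dev/urandom of=/media/usbkey/nas.key bs=512 count=8   # private key lives on the stick
cryptsetup luksFormat /dev/md0 /media/usbkey/nas.key         # initialize the encrypted volume
cryptsetup luksOpen /dev/md0 nas_crypt --key-file /media/usbkey/nas.key
mkfs.xfs /dev/mapper/nas_crypt                               # XFS, as planned for big files
mount /dev/mapper/nas_crypt /mnt/secure
```

    Without the USB stick plugged in, the volume cannot be opened, which is the whole point of keeping the key off the box.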

    The hardware will look as follows:

    • 6x Maxtor 7L300R0 MaXLine III, 7200rpm, 16MB, 300GB, IDE, 24/7 server: 60-month warranty! 104€ each = 624€
    • AMD Athlon 64 3000+ BOX, Socket 939, Venice, the least expensive Athlon 64: 99€
    • Asus A8N-VM CSM, mATX, Nvidia 6150/430 video, Socket 939, SATA RAID, because I need PCI-E for the hardware RAID: 70€
    • Promise SuperTrak EX8350, SATA2, 8 SATA ports, RAID 6 controller, because Promise supports Linux: 297€
    • Thermaltake Armor (bought previously)
    • Zalman CNPS9500 LED, Socket 754/939/940/478/LGA775: 53€

    I will explain later why I did not buy a SOHO NAS, but briefly:

    • For the price of my complete system, I would get an empty SOHO NAS box, or one with only 250GB (RAID 0)
    • I have 8 + 6 = 14 SATA ports!
    • SOHO NAS supports RAID 5 at best
    • SOHO NAS performance is poor
    More pictures, howto, and benchmarks soon...
  • synology-ds408 How to mount your Synology NAS (or any other NAS brand) shared folder under Linux using CIFS.

    CIFS stands for "Common Internet File System," also known under its older name SMB (Server Message Block); it is the network protocol used by Windows clients for issuing file access requests to Windows servers.

    Open a terminal and, as root, create as many directories as needed in /mnt/:


    # mkdir /mnt/video
    # mkdir /mnt/music

    Unfortunately there is no frontend or graphical editor for maintaining the entries of /etc/fstab; you'll have to use your favorite text editor to add the following entries (one shared folder = one line):

    # vi /etc/fstab

    linux.synology.mount.disk.fastb

    //ipAddress/shareName  /mnt/directory  cifs  iocharset=utf8,user=synologyUser,password=synologyUserPassword,rw,uid=linuxLogonUser,gid=linuxLogonUserGroup  0  0

    For example, in OpenSuSE 11.3:

    //nas/video  /mnt/video  cifs  iocharset=utf8,user=admin,password=admin,rw,uid=cedric,gid=users  0  0
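    One common pitfall: an fstab entry must stay on a single line with exactly six whitespace-separated fields. A quick sanity check of the example line above (the share name and credentials are the sample values from this howto, not real ones):

```shell
# device, mount point, type, options, dump, pass = 6 fields, all on ONE line
LINE='//nas/video /mnt/video cifs iocharset=utf8,user=admin,password=admin,rw,uid=cedric,gid=users 0 0'
echo "$LINE" | awk '{print NF}'   # prints 6 if the line is well-formed
```

    You can also try the same options interactively with mount -t cifs before persisting them, and replay the whole file without a reboot using mount -a.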

    Now start Dolphin (the default KDE 4 file manager), or your preferred file manager, and navigate to /mnt.

    You can now drag each directory to the left bar (“Places”) for quicker access

     linux.synology.mount.disk.dolphin  

    Or you can drag them on the desktop

    linux.synology.mount.disk.plasma

    Choose either Folder View, to view the content in real time in a Plasma widget, or Icon. You can see the result below:

    linux.synology.mount.disk.desktop

    You're done. I also removed Kaffeine, installed VLC, and I am now enjoying streaming from the Synology NAS.

    NOTE:

    I also added this howto to the official Synology wiki page http://forum.synology.com/wiki/index.php/Mapping_a_Network_Drive

  • I am still waiting for the last 4 hard disks; they should arrive next week. While mounting everything together in the case, I was thinking about the range of tests I may run on this NAS before putting it into production.

    What kind of operating system will I use?

    OS              | Software RAID 5/6                               | Hardware RAID 5/6                        | Remarks
    Windows XP      | Needs Windows Server, BUT there is a workaround | through the Promise EX8350 driver        | I do not want a fully fledged OS for a file server, but I want to look at performance
    Linux OpenFiler | standard, using "mdadm"                         | through the Promise EX8350 Linux driver  | elegant, free, and the OS footprint can be reduced to its minimum
    OpenSolaris     | using ZFS RAID-Z                                | no driver support                        | ZFS is a great file system, and RAID-Z solves some problems of software RAID 5 and hardware RAID 5 at the same time!
    others?         | feel free to submit an alternative: contact me or use Comments



    And this is what my Network Attached Storage looks like:

    NAS server
    Processor:               AMD Athlon 64 3000+ BOX, Socket 939, Venice
    Platform:                Asus A8N-VM CSM, mATX, Nvidia 6150/430 video, Socket 939, SATA RAID
    BIOS:                    xxxxx
    RAM:                     Corsair CM72DD512AR-400 (DDR2-400 ECC, reg.), 2x 512 MB, CL3-3-3-10 timings
    System hard drives:      RAID 6:
                             2x Maxtor 7L300R0 MaXLine III, 7200rpm, 16MB, 300GB, IDE, 24/7 server
                             4x Western Digital Caviar RE, 7200rpm, 8MB, 320GB, SATA, 24/7
    USB attached:            1x Maxtor OneTouch USB2/FireWire 300GB
                             1x Maxtor OneTouch2 USB2/FireWire 300GB
    Mass storage controller: Promise SuperTrak EX8350, SATA2, 8 SATA ports, RAID 6 controller
    Networking:
    Graphics card:           on-board NVIDIA GeForce 6150

    What kind of performance tests will I run?

    Software
    Performance test:  c't h2benchw 3.6, IoZone
    I/O performance:   IOMeter (file-server, web-server, database and workstation benchmarks)

    Future clients using the NAS file server

    Laptop:        Windows XP Professional, HP nx7000, 100Mb NIC, Pentium-M 1.6GHz, 1500MB RAM, 60GB hard disk, 15.4" WXGA
    Home desktop:  Linux SuSE 10.1, 1000Mb NIC, AMD Athlon XP 3400+, 1500MB RAM, Asus A78Nx Nforce2 mainboard

    Of course I will stress-test the box for 1 week before putting any vital/useless data on it!
  • origin: Wikipedia

    SAMBA
    An open source implementation of the SMB file sharing protocol that provides file and print services to SMB/CIFS clients. Samba allows a non-Windows server to communicate with the same networking protocol as the Windows products. Samba was originally developed for Unix but can now run on Linux, FreeBSD and other Unix variants. It is freely available under the GNU General Public License. The name Samba is a variant of SMB, the protocol from which it stems. As of version 3, Samba not only provides file and print services for various Microsoft Windows clients but can also integrate with a Windows Server domain, either as a Primary Domain Controller (PDC) or as a Domain Member. It can also be part of an Active Directory domain.
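    As a sketch of how little configuration a Samba share needs, here is a minimal smb.conf share definition; the share name, path and user are examples, not values from the text:

```ini
; minimal Samba share sketch - adjust names to your setup
[video]
   path = /mnt/video
   read only = no
   valid users = cedric
```

    After editing smb.conf, the daemon re-reads its configuration and the share appears to SMB/CIFS clients under \\server\video.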

    CIFS
    Server Message Block (SMB) is a network protocol mainly used for sharing files, printers, serial ports, and miscellaneous communications between nodes on a network. It is mainly used by computers running Microsoft Windows.

    FTP
    The File Transfer Protocol (FTP) is a software standard for transferring computer files between machines with widely different operating systems. It belongs to the application layer of the Internet protocol suite.
    NFS
    Network File System (NFS) is a protocol originally developed by Sun Microsystems in 1984 and defined in RFCs 1094, 1813, (3010) and 3530, as a file system which allows a computer to access files over a network as easily as if they were on its local disks.
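    For comparison with CIFS, exporting a directory over NFS is a one-line config fragment in /etc/exports; the path and subnet below are examples:

```
# /etc/exports - allow the local subnet read-write access
/mnt/video 192.168.1.0/24(rw,sync,no_subtree_check)
```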

    RSYNC
    rsync is a computer program which synchronises files and directories from one location to another while minimizing data transfer using delta encoding when appropriate. An important feature of rsync not found in most similar programs/protocols is that the mirroring takes place with only one transmission in each direction.


  • Penguin computing power!

    Here we go: yesterday I received all the missing hardware to finish building my own NAS.

    A NAS (or Network Attached Storage) is a hard disk storage device that is set up with its own network address rather than being attached directly to the computer that is serving applications or files to a network's users. By using a NAS, both applications and files can be served faster because they are not competing for the same processor resources. The NAS is attached to a local area network (typically, an Ethernet network) and assigned an IP address....

    Here are some pictures... Nothing really special if you already know how to build a computer on your own...


    The 4 Western Digital hard disks are RAID-optimized: they feature time-limited error recovery, which improves compatibility with RAID adapters and prevents drive fallout caused by the extended error-recovery processes common to desktop hard drives.

    "Desktop drives are designed to protect and recover data, at times pausing for as much as a few minutes to make sure that data is recovered. Inside a RAID system, where the RAID controller handles error recovery, the drive needn't pause for extended periods to recover data. In fact, heroic error recovery attempts can cause a RAID system to drop a drive out of the array. WD RE is engineered to prevent hard drive error recovery fallout by limiting the drive's error recovery time. With error recovery factory set to seven seconds, the drive has time to attempt a recovery, allow the RAID controller to log the error, and still stay online." from  Western Digital

    These drives, along with the Maxtor MaxLine III, have a 60-month warranty: highly recommended!

    Ask the shop to provide you disks not from the same batch, to statistically reduce the chance of simultaneous disk failures.
    In order to replace a faulty RAID disk as fast as possible, it is not a bad idea to put a number on each of them. Normally hard disks in a NAS sit in zero-insertion-force, hot-swap bays, but those cost at least $250 for 4 drives...
    First batch of 3 disks (3x 320GB Western Digital). Using a Thermaltake Armor tower helps a lot in my setting.
    Same remark: it is obvious, but cables can also fail, and it is not recommended to pull out the wrong cable while the array is online.
    Second batch of 3 disks; number 6 will be mounted later.
    Thermaltake provides a really good cooling fan with an integrated blue LED in the front bay.
    The Asus A8N-VM mainboard: mini ATX, Nvidia 6150/430 video, Socket 939, SATA RAID 0/1/5

    Ohhh no, cables are starting to pop out of the case. Do not expect to see a Macintosh-tidy internal case in the next few pictures...
    Routing cables, the case has a lot of possibilities to hide them.

    The AMD Athlon 64 3000+, Socket 939, Venice core, is cooled down by a Zalman CNPS9500
    Bringing power to all disks.
    Front of the case: the mainboard, along with the additional hardware RAID card (Promise SuperTrak EX8350, SATA2), has more than 14 SATA ports... plenty of extension possibilities with a case of... 20 bays.
    The front cooler will be able to suck air freely.
    Power ON!

    The system makes a lot of noise (not only coming from the fans); I reduced the Zalman noise (horrible at full speed) with the included speed controller.
    The 7 hard disks (one as hot spare) make the case wobble.
    Detail of the Zalman CNPS9500 LED

  • Status: in development
    Developers: 5

    Homepage:          www.freenas.org
    Version:           0.66
    Based on:          FreeBSD 6
    Supports:          CIFS (Samba), FTP, NFS, RSYNC
    Software RAID:     0, 1, 5
    Hardware RAID:     yes, if supported by FreeBSD 6
    Interface:         web interface, PHP scripts
    Size:              16MB
    Can be installed:  Compact Flash, hard drive or USB key
    Filesystems:       UFS, FAT32, EXT2/EXT3, NTFS (limited read-only)
    Hard drives:       ATA/SATA, SCSI, USB and FireWire
    Network:           all cards supported by FreeBSD 6 (including wireless cards!)

    Added value
    Test it without breaking your NAS server, with the VMware image:
    FreeNAS is installed on the first hard drive (2 partitions), with a RAID 5 volume across the 3 other hard drives. The configured IP address is 192.168.1.1, with the default login/password.

    Why choose it                                              | Why avoid it
    Small (16MB), does not need an additional disk for the OS  | Future releases?
    FreeBSD is secure out of the box: the fewest buffer vulnerabilities in years!
    Very nice GUI

    Performances Tests

    in progress

  • RAID @ home raid5  Presentation

    Openfiler
    is a powerful, intuitive browser-based network storage software distribution. Openfiler delivers file-based Network Attached Storage and block-based Storage Area Networking in a single framework.

    Openfiler sits atop CentOS Linux (which is derived from sources freely provided to the public by a prominent North American Enterprise Linux vendor). It is distributed as a stand-alone Linux distribution. The entire software stack interfaces with third-party software that is all open source.

    Status: stable, in development
    # Developers: __


    Homepage:          www.openfiler.com
    Version:           1.1.1 stable (2005), 2.0beta (2006)
    Based on:          CentOS Linux
    Supports:
    • NFS
    • SMB/CIFS
    • HTTP/WebDAV
    • FTP
    • more
    Network directories supported:
    • NIS, LDAP (with support for SMB/CIFS encrypted passwords)
    • Active Directory
    • Hesiod
    • Kerberos 5
    • more
    Software RAID:     0, 1, 5, 6
    Hardware RAID:     yes, if supported by CentOS
    Interface:         web interface, PHP scripts
    Size:              4GB
    Can be installed:  on hard disk only, because of its size
    Filesystems:       UFS, FAT32, EXT2/EXT3, NTFS (limited read-only)
    Hard drives:       ATA/SATA, SCSI, USB and FireWire
    Network:           all cards supported by CentOS (including wireless cards!)

    RAID @ home raid5  Installation

    Installation is straightforward; you only have to follow the on-screen flow. But here is a small HowTo:

    RAID @ home raid5  HowTo: software RAID 5 install

    What I want: software RAID 5, 4 disks of 320GB (305GB real), using the NVIDIA SATA chipset (not a dedicated RAID 5 board).

    Note: these pictures are not screenshots but pictures taken with a Sony camera...
    http://www.openfiler.com/download
    Download the ISO image from SourceForge, burn it to a CD, insert the CD, and boot the PC.
    The first step is to TEST the quality of the medium. Openfiler did not recognize the NVIDIA controller:
    choose "Add device".
    NVIDIA drivers (both Ethernet and drive controller) are at the end of the list.
    I've added both drivers manually, then "Done". The welcome page: click "Next".
    Choose the keyboard language. I am not a novice, so let's look at the advanced configuration.
    The 4 disks are recognized. First I add some space for the operating system. If you do not want a fifth disk just for the operating system, you'll have to reserve a small amount of the total space for the Openfiler system. Note: this space will be located on the first disk and won't be in the RAID array... so no redundancy.
    Anyway, it is uncommon to install the RAID engine on the RAID array itself.
    Let's have 2GB for the system.
    And 1GB for swap. Then I click on the RAID button; since I have no RAID predefined, only the first choice is available: "Create a RAID partition".
    I will have to create a RAID partition on each of the 4 drives: I reserved 300GB on disk SDB, 300GB on disk SDC, on disk SDA
    and on disk SDD... until all 4 disks contain a RAID partition. I click on the RAID button for the 5th time and choose "Create a RAID device".
    My disk array will be named /RAID (mount point), RAID level 5. Result: /RAID (device /dev/md0) with an EXT3 file system.
    Nothing particular, the default values are good. Language support: English.
    Choose the time zone. Enter a good root password. Mine is too short, but this is only a prototype for determining the performance and reliability of the setup.
    Confirm all entered values by clicking Next, and wait for the RAID array initialization.

    The CD is ejected, and the machine reboots. Point your browser to https://box_ip:446/
    and administer the box remotely.

    If my explanations are not clear enough, or you want more details, visit the official installation page.
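    For reference, what the installer does behind the scenes corresponds roughly to the standard Linux mdadm commands below. This is a sketch only, and the partition names are assumptions:

```shell
# Sketch, assuming the four RAID partitions are sdb1, sdc1, sdd1, sde1
mdadm --create /dev/md0 --level=5 --raid-devices=4 /dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/sde1
mkfs.ext3 /dev/md0                 # the installer formats the array as EXT3
mdadm --detail /dev/md0            # state, rebuild progress, per-disk status
cat /proc/mdstat                   # quick kernel-side view of all md arrays
```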

    RAID @ home raid5  HowTo: hardware RAID 5 install

    in progress...

    RAID @ home raid5  Administration

    Check Openfiler Administration guide
    point your browser to https://box_ip:446/

    RAID @ home raid5  Problems encountered

    OpenFiler 1.1:
    • The SATA controller was not recognized; this forced me to use the 2.0beta.
    2.0beta1:
    • Unable to read or mount manually 2 different USB keys (FAT32); also unable to read CD-ROMs (closed ISO and CD-RW).
    • The network card (NFORCE4) was not recognized by Openfiler 2.0beta, and I failed to copy the NVIDIA driver onto the box because of the previous point.
    2.0beta2:
    • Works perfectly, did not ask for any supplemental drivers.

    RAID @ home raid5  Web Interface GUI

    Screenshots

    RAID @ home raid5  Performances Tests

    in progress


    RAID @ home raid5  Conclusions

    Why choose it                                                       | Why avoid it
    Enterprise NAS features out of the box                              | You do not need enterprise NAS features
    Very nice web GUI                                                   | 4GB is too much, and it needs an additional small disk just for booting the OS
    A lot of functionality                                              | Limited choice of file systems: no ReiserFS (the Swiss army knife of filesystems), no JFS or XFS, better suited for big files
    A big community of users and developers, good online documentation  | No AMD64 version, but that's really not an issue
    Very easy to get a software RAID 5 array working                    |
    Stable, Linux 2.6.9 kernel base                                     |
    GPL, but an Enterprise version (with support) is also available     |


     
  • under construction
  • under construction
  • Putting OpenSolaris in a NAS server

    OpenSolaris is an open source project created by Sun Microsystems to build a developer community around the Solaris Operating System technology.
    OpenSolaris Express is the official distribution and can be downloaded HERE, but I will use a fork of that code.

    Raid @ home with opensolaris and ZFS  Why Solaris for a NAS server?

    Solaris itself, while being a rock-solid operating system, is not really needed for a NAS server (it is oversized). What has increased my interest in it is ZFS, the Zettabyte File System. This is an extract from opensolaris.org; all the arguments fit my needs nicely:

    <quote>

    • ZFS is a new kind of filesystem that provides simple administration, transactional semantics, end-to-end data integrity, and immense scalability. ZFS is not an incremental improvement to existing technology; it is a fundamentally new approach to data management. We've blown away 20 years of obsolete assumptions, eliminated complexity at the source, and created a storage system that's actually a pleasure to use.
    • ZFS presents a pooled storage model that completely eliminates the concept of volumes and the associated problems of partitions, provisioning, wasted bandwidth and stranded storage. Thousands of filesystems can draw from a common storage pool, each one consuming only as much space as it actually needs. The combined I/O bandwidth of all devices in the pool is available to all filesystems at all times.
    • All operations are copy-on-write transactions, so the on-disk state is always valid. There is no need to fsck(1M) a ZFS filesystem, ever. Every block is checksummed to prevent silent data corruption, and the data is self-healing in replicated (mirrored or RAID) configurations. If one copy is damaged, ZFS will detect it and use another copy to repair it.
    • ZFS introduces a new data replication model called RAID-Z. It is similar to RAID-5 but uses variable stripe width to eliminate the RAID-5 write hole (stripe corruption due to loss of power between data and parity updates). All RAID-Z writes are full-stripe writes. There's no read-modify-write tax, no write hole, and — the best part — no need for NVRAM in hardware. ZFS loves cheap disks.
    • But cheap disks can fail, so ZFS provides disk scrubbing. Like ECC memory scrubbing, the idea is to read all data to detect latent errors while they're still correctable. A scrub traverses the entire storage pool to read every copy of every block, validate it against its 256-bit checksum, and repair it if necessary. All this happens while the storage pool is live and in use.
    • ZFS has a pipelined I/O engine, similar in concept to CPU pipelines. The pipeline operates on I/O dependency graphs and provides scoreboarding, priority, deadline scheduling, out-of-order issue and I/O aggregation. I/O loads that bring other filesystems to their knees are handled with ease by the ZFS I/O pipeline.
    • ZFS provides unlimited constant-time snapshots and clones. A snapshot is a read-only point-in-time copy of a filesystem, while a clone is a writable copy of a snapshot. Clones provide an extremely space-efficient way to store many copies of mostly-shared data such as workspaces, software installations, and diskless clients.
    • ZFS backup and restore are powered by snapshots. Any snapshot can generate a full backup, and any pair of snapshots can generate an incremental backup. Incremental backups are so efficient that they can be used for remote replication — e.g. to transmit an incremental update every 10 seconds.
    • There are no arbitrary limits in ZFS. You can have as many files as you want; full 64-bit file offsets; unlimited links, directory entries, snapshots, and so on.
    • ZFS provides built-in compression. In addition to reducing space usage by 2-3x, compression also reduces the amount of I/O by 2-3x. For this reason, enabling compression actually makes some workloads go faster.
    • In addition to filesystems, ZFS storage pools can provide volumes for applications that need raw-device semantics. ZFS volumes can be used as swap devices, for example. And if you enable compression on a swap volume, you now have compressed virtual memory.
    • ZFS administration is both simple and powerful.

    </quote>
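    To give an idea of how simple that administration is, here is a sketch using the standard zpool/zfs commands; the pool name and disk names are examples, not from an actual setup:

```shell
# Create a RAID-Z pool from three disks, then carve filesystems out of the pool
zpool create tank raidz c0t0d0 c0t1d0 c0t2d0
zfs create tank/video                  # new filesystem, no newfs, no fstab editing
zfs set compression=on tank/video      # built-in compression, per filesystem
zfs snapshot tank/video@before-copy    # constant-time snapshot
zpool status tank                      # health, scrub state, per-device errors
```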

    This speaks for itself. I've seen 2 demos HERE, and while the hardware support is not that great, I've decided to give it a try. Note that Linux may have a ZFS port before July 2006, as it is a sponsored Google Summer of Code project.


    Raid @ home with opensolaris and ZFS Which Solaris flavor

    In fact, it is possible to use one of the following OpenSolaris distributions:
    • BeleniX is a *NIX distribution built on the OpenSolaris source base. It is currently a LiveCD distribution but is intended to grow into a complete distro that can be installed to hard disk. BeleniX has been developed out of Bangalore, the silicon capital of India, and was born at the India Engineering Center of Sun Microsystems. And... it uses KDE: the best open source desktop.
    • SchilliX, a live CD.
    • marTux, a live CD/DVD, for SPARC
    • Nexenta, a Debian-based distribution combining GNU software and Solaris' SunOS kernel
    • Polaris, a PowerPC port

    Status: stable, in development
    # Developers: __

    Homepage:          http://belenix.sarovar.org
    Version:           0.4.3a
    Based on:          OpenSolaris
    Supports:
    • NFS
    • SMB/CIFS
    • HTTP/WebDAV
    • FTP
    Network directories supported:
    • ???
    Software RAID:     0, 1, 5, 6
    Hardware RAID:
    Interface:         none
    • Remote login is deactivated but can be re-enabled: you need to comment out the line "CONSOLE=/dev/console" in the file /etc/default/login to allow remote root login.
    • maybe VNC remote access.
    Size:              ??
    Can be installed:
    • Live CD -> but mount points have to be recreated
    • on hard disk
    Filesystems:       EXT2/EXT3, ZFS
    Hard drives:       ATA/SATA, SCSI, USB and FireWire
    Network:           not well supported...

    RAID @ home raid5  Installation

    Since BeleniX is a live CD, it is more than enough for just playing around with ZFS.

    Raid @ home with opensolaris and ZFS Playing with ZFS



    Raid @ home with opensolaris and ZFS Future






    Raid @ home with opensolaris and ZFS Links and resources


     
  •  I started looking at RAID 5 NAS array systems (a way to ensure redundancy of data and make a set of files accessible across a network of machines) 2 months ago, reading a lot of articles from the best hardware reviewers:

    A lot of new products have appeared in the last months, a sign of consumer demand. I have a lot of possibilities, each with their strengths and weaknesses:

    Infrant ReadyNAS NV, Intel SS4000E, Thecus 4100

    1. Build my own small system ($300 without disks): an ASUS Nforce mainboard (Gigabit, PCI-E, video) and an Athlon 64 3200+, but the CPU alone consumes 90 watts (less in economic mode) and it is difficult to find a power supply under 200 watts. I already have a box (a mini ThermalTake tower, for sure too big for the living room).
    2. Buy an Infrant ReadyNAS NV ($900 without disks), because it has a great community (forums), is small, looks nice, and consumes only 50 watts. But I am concerned about performance problems (not consistent, good at reads). Mind you, it is by far the fastest SOHO NAS on the market, as it outperforms the Buffalo TeraStation and Synology-based NAS by a wide margin. [AnandTech]
    3. Buy an Intel SS4000E ($850 without disks), mainly because it is small and runs its dedicated XOR engine at 400MHz vs only 200MHz for the Infrant NV; but it also consumes a lot more, 200 watts, and it hasn't been reviewed so far. Intel's technical sheet also states that the CPU can reach 600MHz.
    4. Buy a dedicated RAID 5 hardware card. There are a lot available, but their prices are ridiculous for personal use, more than 400 euros, and for a little more it is possible today to build a top system based on an NFORCE4, an Athlon 64, and memory. Linux driver support is not bad (Promise, Escalade), but the drivers are not open source. This option fell through, as I do not have a PCI-Express port on my A7N8X NFORCE2 and may want to get rid of that big tower soon.
    I also want a Linux-powered NAS, because I feel more confident with Linux file systems, where filename case is relevant, the kernel can be stripped down to what is really needed, and no costly license is required (Windows XP or Embedded 2003 are out). I found a lot of open source and free RAID operating systems: OpenFiler, FreeNAS, NasLite, to name a few.

    I also came across some very good resources, for example one listing the SATA chipsets recognized under Linux, which is a must-read before buying any mainboard or controller. And then I got shocked by this RAID performance roundup, Hardware vs Software RAID, where the Linux kernel is a clear winner.

    Basically, the choices are now limited:
    • Wait for the Intel SS4000E review, or hope for a faster ReadyNAS from Infrant.
    • Keep my biggest tower (the huge Thermaltake Armor) and run a software RAID array on a new mainboard (time to get rid of my 2001 NFORCE2 mainboard?).
    I expect to build a Linux NAS RAID 5 array made of 4 Maxtor 7L300S0 MaxLine III, 7200rpm, 16MB, 300GB, SATA, 24/7, 1M MTBF (5-year warranty), as I already have 2 of them and found them reliable, for a total of 3/4 * 1200GB = 900GB of usable data, and hook 2 external USB disks to it (OneTouch 250GB and OneTouch2 250GB).
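    The capacity figure above follows from the RAID parity arithmetic: RAID 5 sacrifices one disk's worth of space for parity, RAID 6 two. A quick check with this build's numbers:

```shell
n=4       # disks in the array
size=300  # GB per disk
echo "RAID 5 usable: $(( (n - 1) * size )) GB"   # 3/4 of 1200GB = 900GB
echo "RAID 6 usable: $(( (n - 2) * size )) GB"
```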

    Links and resources
  • seagate.momentus.xt.500gb

    The Seagate® Momentus® XT drive enables laptop PC users to enjoy solid state-like performance without sacrificing storage capacity and affordability. The Momentus XT solid state hybrid drive utilizes Adaptive Memory™ technology to dynamically optimize performance by aligning to user needs. This perfect balance of SSD and HDD delivers low heat, noise and vibration, and is available in capacities up to 500GB.

    Here is the drive I am testing; information you can easily find with drivedetect.exe (http://support.seagate.com/kbimg/utils/drivedetect.exe)

    Model: ST95005620AS, Serial: 5YX03VW9, Firmware: SD23    

    this is, by the way, the latest firmware as of today, 28.10.2010

     

    Due to the nature of the Adaptive Memory technology, which monitors your usage, you won't get maximum performance on the first run. Here are 3 runs using CrystalDiskMark 3.0 x64 (C) 2007-2010 hiyohiyo (Crystal Dew World: http://crystalmark.info/)

    seagate.momentus.xt.500gb.crystaldiskmark01 seagate.momentus.xt.500gb.crystaldiskmark02seagate.momentus.xt.500gb.crystaldiskmark03 

    In order to try to reveal the real performance of the disk without the SSD cache, I made a run with a bigger chunk of data, 4GB, the size of the internal cache: good, but not stellar.

    seagate.momentus.xt.500gb.crystaldiskmark04

    As a reference, here is my old drive, a Western Digital Blue WD3200BJKT from August 2008. Again, not a lot of difference. western.digital.WD3200BJKT.crystaldiskmark western.digital.WD3200BJKT

    And the Intel SSD flagship, the SSDSA2M160G2GC (160GB). This Intel X25-M SSD 160GB cost 400€, so 3.5 times more expensive (400€/114€) for 2.3 times (240.1/104.5) the sequential read performance:

    intel.ssd.160gb.sa2m160G2GCintel.x25m.G2.ssd.160gb

    Conclusions

    Test: 1000 MB                 | Seagate Momentus XT              | Intel X25-M SSD 160GB
    Sequential Read               | 104.513 MB/s                     | 240.086 MB/s
    Sequential Write              | 79.800 MB/s                      | 107.052 MB/s
    Random Read 512KB             | 37.805 MB/s                      | 163.351 MB/s
    Random Write 512KB            | 48.630 MB/s                      | 76.210 MB/s
    Random Read 4KB (QD=1)        | 0.466 MB/s [113.7 IOPS]          | 22.328 MB/s [5451.2 IOPS]
    Random Write 4KB (QD=1)       | 0.928 MB/s [226.5 IOPS]          | 45.021 MB/s [10991.4 IOPS]
    Random Read 4KB (QD=32)       | 1.052 MB/s [256.8 IOPS]          | 23.918 MB/s [5839.3 IOPS]
    Random Write 4KB (QD=32)      | 0.833 MB/s [203.5 IOPS]          | 50.876 MB/s [12420.9 IOPS]
    Test target                   | [C: 39.5% (48.5/123.0 GB)] (x5)  | [C: 69.7% (103.9/149.0 GB)] (x5)
    Date                          | 2010/08/28 14:17:07              | 2010/08/28 14:03:54

    OS: Windows 7 Ultimate Edition [6.1 Build 7600] (x64)

    * MB/s = 1,000,000 byte/s [SATA/300 = 300,000,000 byte/s]

    All in all, still a very good 7200rpm drive at a good price (114€), a cost-effective solution (speed and storage). But don't expect SSD-like performance! Until SSD drives come down in price, it's an excellent alternative for someone who needs the storage space or just a new drive :-)
    I am seeing faster boot, faster loading of common apps, and an overall boost in performance, but is that not just because I was upgrading from an old, slower drive?

    Note: I cloned my old Western Digital Blue WD3200BJKT to the new Seagate XT using Acronis True Image 2010 Home.

  • I am still testing my NAS system (seven 300GB disks), and while testing OpenSolaris (under BeleniX) and Googling, I found this page:

    This blog is about the Google Summer of Code project "ZFS filesystem for FUSE/Linux"

    For those of you who do not know what FUSE is: FUSE is the Filesystem in Userspace Linux kernel module. It allows non-privileged users to create their own filesystems without writing any kernel code.

    ZFS has many features which can benefit all kinds of users - from the simple end-user to the biggest enterprise systems:
    • Provable integrity - it checksums all data (and meta-data), which makes it possible to detect hardware errors (hard disk corruption, flaky IDE cables..). 
    • Atomic updates - means that the on-disk state is consistent at all times, there's no need to perform a lengthy filesystem check after forced reboots/power failures.
    • Instantaneous snapshots and clones - it makes it possible to have hourly, daily and weekly backups efficiently, as well as experiment with new system configurations without any risks.
    • Built-in compression, encryption
    • Highly scalable
    • Pooled storage model - creating filesystems is as easy as creating a new directory. You can efficiently have thousands of filesystems, each with its own quotas and reservations, and different properties (compression algorithm, checksum algorithm, etc.).
    • Built-in stripes (RAID-0), mirrors (RAID-1) and RAID-Z (it's like software RAID-5, but more efficient due to ZFS's copy-on-write transactional model). 
    • Variable sector sizes, adaptive endianness etc...
    In fact this is a sponsored Google Summer of Code project. Note that Apple is also currently porting ZFS to OS X. That could mean ZFS becomes mainstream within the next two years.
    And I expect to test RAID-Z... For those interested in RAID-Z raw performance, you can read this highly technical blog entry: WHEN TO (AND NOT TO) USE RAID-Z
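The single-parity idea behind RAID-Z (like RAID-5's) can be sketched with plain XOR arithmetic: the parity block is the XOR of the data blocks, so any single lost block can be recomputed from the survivors. A minimal sketch (the byte values are arbitrary):

```shell
# Three data "blocks" (single byte values here) and their XOR parity
d1=170 d2=85 d3=60
parity=$(( d1 ^ d2 ^ d3 ))

# Simulate losing disk 2: XOR-ing the parity with the survivors rebuilds it
rebuilt=$(( parity ^ d1 ^ d3 ))
echo "lost=$d2 rebuilt=$rebuilt"
```

What makes RAID-Z safer than classic RAID-5 is not the parity math but the copy-on-write transactional model mentioned above: the stripe is never partially overwritten in place, which is what removes the write hole.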

    Sun expects to have a stable ZFS version by June 2006.
  • Before putting my monster NAS online (pictures will follow soon), I am playing a lot with NEXENTA under VMWARE player.

    I've found an excellent PDF (THE LAST WORD IN FILE SYSTEMS) which explains why ZFS may be the Holy Grail of file systems, while if you want to learn how to administrate pools, I recommend the ZFS admin guide

    Here is my first try, with 7 simulated disks (this example uses files and not real devices, even though I have 7 real disks sitting next to me ;-)). The next steps will be to export the pool as an NFS share, pull some disks out, activate encryption, crontab snapshots and remote ssh backups of some vital data.


    # mkdir /vault
    create a directory for storing all the virtual disks
    # mkfile 64m /vault/disk1
    # mkfile 64m /vault/disk2
    # mkfile 64m /vault/disk3
    # mkfile 64m /vault/disk4
    # mkfile 64m /vault/disk5
    # mkfile 64m /vault/disk6
    # mkfile 64m /vault/disk7
    create 7 virtual disks named disk1 to disk7
    # zpool status
    no pools available
    check if there is any pool already defined....
    # zpool create nasvault raidz /vault/disk1 /vault/disk2 /vault/disk3 /vault/disk4 /vault/disk5 /vault/disk6
    6 disks will be in a raidz pool
    # zpool status
      pool: nasvault
     state: ONLINE
     scrub: none requested
    config:

            NAME              STATE     READ WRITE CKSUM
            nasvault          ONLINE       0     0     0
              raidz           ONLINE       0     0     0
                /vault/disk1  ONLINE       0     0     0
                /vault/disk2  ONLINE       0     0     0
                /vault/disk3  ONLINE       0     0     0
                /vault/disk4  ONLINE       0     0     0
                /vault/disk5  ONLINE       0     0     0
                /vault/disk6  ONLINE       0     0     0
    RAIDZ:

    A RAID-Z configuration can have either single or double parity, which means that one or two device failures respectively can be sustained without any data loss. Disks can be of different sizes, and there is no write hole as found in other RAID arrays.
    # df -h /nasvault
    Filesystem             size   used  avail capacity  Mounted on
    nasvault               384M    16K   384M     1%    /nasvault
    checking the size of the pool
    # mkfile 64m /vault/disk8
    # mkfile 64m /vault/disk9
    # zpool add -f nasvault raidz /vault/disk8 /vault/disk9
    extending the pool on the fly with 2 new disks (disk1 to disk6 are already in use; -f is needed because this raidz vdev is narrower than the first one)
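The remaining next steps (a hot spare for the 7th disk, the NFS export and the snapshots) map onto a handful of ZFS commands. A sketch of what I plan to run, not yet tested on my pool:

```shell
# Keep the 7th disk as a hot spare for the pool
zpool add nasvault spare /vault/disk7

# Export the pool's root filesystem over NFS
zfs set sharenfs=on nasvault

# Take a snapshot (cheap and instantaneous thanks to copy-on-write)
zfs snapshot nasvault@daily-backup

# List existing snapshots
zfs list -t snapshot
```

The snapshot line is what would go into a crontab entry for hourly/daily backups.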

    Some noise about the development of a mini OpenSolaris boot file (miniroot.gz) under 60 MB and able to boot from a USB disk has popped up on the OpenSolaris forums. Exactly the right schedule for my NAS project; if it can come out in less than 2 weeks, it would be perfect!
  • Is this the storage device of the future? With 4096 cantilevers, it can actually store 25 DVDs on an area the size of a postage stamp! I remember in 2002 when it had "only" 1024 cantilevers...

    The "millipede" high-density data storage system is based on micromechanical (MEMS*) components borrowed from atomic force microscopy (AFM). Tiny depressions created with an AFM tip in a polymer medium represent stored data bits that can then be read back by the same tip. Data written in this way can also be erased using the same tip, and the polymer medium can be reused thousands of times. This thermomechanical storage technique is capable of achieving data densities exceeding 1 Tb/in², well beyond the expected limits of magnetic recording.
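That 1 Tb/in² figure lines up with the "25 DVDs on a postage stamp" claim. A quick back-of-the-envelope check, assuming 4.7 GB single-layer DVDs and a stamp of roughly one square inch:

```shell
awk 'BEGIN {
  gb_per_sqin = 1e12 / 8 / 1e9        # 1 Tb/in^2 -> 125 GB per square inch
  printf "%.0f GB/in^2 = ~%.0f DVDs (4.7 GB each)\n", gb_per_sqin, gb_per_sqin / 4.7
}'
```

which lands in the same ballpark as the 25-DVD claim.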

    Here is the page of the laboratory where it was developed: the IBM Zurich Research Laboratory, Switzerland :-)
    Do not forget to access all available pages: Concept and components -  Read/write/erase process  - Recording technology - Servo and media navigation  - Small-scale prototype