Entries Tagged as 'File Systems'

rsync

I’ve gotta ask why no one has ever invested the time and energy into porting the rsync client code as a native Windows executable.

Yes, you can certainly run the rsync client (and even the server) under Cygwin; but efficiently backing up files on a workstation to a central store just seems like something that might have occurred to a Windows person over the years.

Could it be that people only think in terms of homogeneous solutions?  That they simply can’t conceive that the best selection of a server might not involve the same operating system as a workstation or desktop?

Yeah, I understand that since many Windows desktops talk to Windows servers, rsync isn’t a commercially viable solution (or even a hobbyist solution) unless you have both server and client; but in many cases a Windows desktop talks to a *nix based server (or NAS), and all you really need to be able to do is run an rsync client.

The benefits of rsync seem to me to be well worth implementing a client on Windows. While the newest version of the file sharing protocol in Windows Vista, Windows Server 2008, and Windows 7 has the ability to do differential file copy, that’s not something likely to be implemented in an optimized fashion in non-Microsoft storage systems (and isn’t going to be implemented in Windows XP or Windows Server 2003 at all); nor is there any reason to depend on a file sharing protocol for synchronization.

Anyway — rsync is a very efficient tool, and something you might want to keep in your toolbox.
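That efficiency comes from rsync’s delta-transfer algorithm: the receiver checksums fixed-size blocks of the file it already has, and the sender then describes the new file as references to those blocks plus literal bytes for whatever changed. Here’s a simplified sketch of that idea — the function names are mine, the block size is deliberately tiny, and real rsync uses a rolling weak checksum plus a stronger hash rather than MD5 alone:

```python
import hashlib

BLOCK = 4  # tiny block size for demonstration; real rsync uses much larger blocks

def block_signatures(data: bytes) -> dict:
    """Receiver side: map each block's digest to its block index."""
    return {
        hashlib.md5(data[i:i + BLOCK]).digest(): i // BLOCK
        for i in range(0, len(data), BLOCK)
    }

def delta(old: bytes, new: bytes) -> list:
    """Sender side: describe `new` as block references into `old` plus literals."""
    sigs = block_signatures(old)
    ops, i, literal = [], 0, bytearray()
    while i < len(new):
        idx = sigs.get(hashlib.md5(new[i:i + BLOCK]).digest())
        if idx is not None:
            if literal:
                ops.append(("literal", bytes(literal)))
                literal = bytearray()
            ops.append(("block", idx))   # receiver already has these bytes
            i += BLOCK
        else:
            literal.append(new[i])       # changed byte must travel on the wire
            i += 1
    if literal:
        ops.append(("literal", bytes(literal)))
    return ops

def apply_delta(old: bytes, ops: list) -> bytes:
    """Receiver side: rebuild the new file from the old file plus the delta."""
    parts = []
    for kind, val in ops:
        parts.append(old[val * BLOCK:val * BLOCK + BLOCK] if kind == "block" else val)
    return b"".join(parts)

old = b"the quick brown fox jumps over the lazy dog!"
new = b"the quick brown cat jumps over the lazy dog!"
ops = delta(old, new)
assert apply_delta(old, ops) == new   # only "cat " travels as literal bytes
```

The point of the sketch: almost all of the file is transferred as cheap block references, which is why syncing a large, slightly-changed file over a slow link is so much faster than recopying it.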

Originally posted 2010-06-04 02:00:01.

Defragmenting

There are many people out there that say that *nix and Mac file systems don’t fragment — only Windows does.

They’re dead wrong.

[I know I’ve said this before, but it’s worth saying again]

All three file systems (and in Windows we’re talking about NTFS, not FAT) derive from the same basic file system organization, and all three have pretty much the same characteristics (there are differences, but those really have nothing to do with the likelihood of fragmentation).

Fragmentation is just a by-product of the way a file system works.  The file system must make decisions about how to lay files down on the disk, and since it doesn’t have a crystal ball it cannot see the future.  Thus if a file is pinned between two other files and it must grow, the file either needs to be moved (leaving an empty spot behind) or extended in another area of the disk (thus becoming fragmented).
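The mechanics are easy to demonstrate with a toy allocator. In the sketch below (a first-fit allocator of my own invention, not any real file system’s code), file “a” gets pinned in place by file “b”; when “a” grows, the allocator has no choice but to give it a second extent elsewhere on the disk:

```python
class Disk:
    """Toy first-fit block allocator, just to illustrate extent fragmentation."""
    def __init__(self, blocks: int):
        self.free = [True] * blocks
        self.extents = {}  # file name -> list of (start, length) extents

    def allocate(self, name: str, length: int) -> None:
        """First fit: grab the first contiguous free run of `length` blocks."""
        run = 0
        for i, is_free in enumerate(self.free):
            run = run + 1 if is_free else 0
            if run == length:
                start = i - length + 1
                for b in range(start, start + length):
                    self.free[b] = False
                self.extents.setdefault(name, []).append((start, length))
                return
        raise RuntimeError("disk full")

    def fragmented(self, name: str) -> bool:
        return len(self.extents[name]) > 1

disk = Disk(blocks=16)
disk.allocate("a", 4)   # "a" occupies blocks 0-3
disk.allocate("b", 4)   # "b" occupies blocks 4-7, pinning "a" in place
disk.allocate("a", 4)   # "a" grows; next free run starts at 8 -> second extent
assert disk.fragmented("a")       # "a" now lives in two extents
assert not disk.fragmented("b")
```

No amount of cleverness in the allocator avoids this case; without knowing in advance that “a” would grow, the only alternatives are moving the file or fragmenting it — which is exactly the trade-off described above.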

There are various schemes for handling file allocations, but most of them rely on the application creating the file giving the operating system (and thus the file system) sufficient information on the file’s maximum size, along with hints as to whether it is temporary, may grow, etc.
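Applications can actually provide that size hint today. A minimal sketch, assuming a Unix-like system: setting the file’s final length up front (and, on Linux, reserving the blocks with `posix_fallocate`) lets the allocator try to find one contiguous run instead of growing the file write-by-write:

```python
import os
import tempfile

final_size = 1024 * 1024  # 1 MiB: the maximum size, known in advance

fd, path = tempfile.mkstemp()
try:
    # Declare the final size up front so the file system knows what's coming.
    os.ftruncate(fd, final_size)
    # On Linux, posix_fallocate() goes further and actually reserves the
    # blocks now, which is the strongest anti-fragmentation hint available.
    if hasattr(os, "posix_fallocate"):
        os.posix_fallocate(fd, 0, final_size)
    assert os.fstat(fd).st_size == final_size
finally:
    os.close(fd)
    os.unlink(path)
```

Windows applications can do the equivalent by setting the file pointer and calling `SetEndOfFile`; either way, the point is the same — the less the file system has to guess, the less it fragments.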

Given that file systems will fragment, the need for defragmentation is real.  Windows recognizes this (mainly because Windows used to use a FAT file system where fragmentation caused severe performance issues).

If you have a *nix or Mac based system, I’m sure you can locate a reasonably good defragmenter (not everyone is in denial about the need for periodically defragmenting the system).  If you have a Windows based system, you already have a reasonably good defragmenter that came with the system (a “lite” version of Executive Software’s Diskeeper; the company now just goes by the name Diskeeper Corporation).  You can, of course, purchase a number of commercial products, like the full blown Diskeeper or O&O Defrag (my personal favorite), or download a host of free or inexpensive products.

The key to defragmenting your system is knowing when to invest the time (and the wear on your disks).  The most accurate answer is when fragmentation reaches a point where it adversely affects performance.  That seems a little vague, but most defragmentation tools will actually do an analysis and advise you whether they should be run.  Some of them offer active (background) defragmentation; but like the file system, they don’t have a crystal ball, and will often cost performance rather than enhance it, so I would just say no to active defragmentation.

A good rule of thumb is that right after you install your system, or any time you install major updates or service packs, you should defragment.  It’s a good idea to clean off temporary files (like your browser cache, etc.) before you defragment.  And you might even want to clean out old restore points (if you have them enabled).

There’s certainly no reason to defragment your system daily or weekly; but an occasional night of running your defragmenter of choice will likely decrease boot time and increase overall system performance.

One other little tidbit: remove your paging file before defragmenting; then, after you’re finished, create a new paging file of a fixed size (i.e. set the minimum and maximum to the same value).  That way you have a nicely defragmented paging file that will neither cause fragmentation nor fragment itself (leading to better system performance).  Of course, if your system has enough memory to run without a paging file, you don’t need one at all.

Originally posted 2010-02-21 01:00:20.

Virtual machines need regular defragging, researcher says

This comes from an article on ComputerWorld; all I can say is: duh!

Virtual disks require the same defragmentation as the same operating system would running on a physical machine; plus, if you choose dynamically expanding containers for the disk on the host, you’ll likely need to power down the virtual machine periodically and defragment the host as well.

You’d think that an article that starts with a title like that couldn’t possibly get any more asinine; well, you’d be wrong:

Windows, as well as third-party software firms, offer defragmenters to reassemble fragmented files. Fragmentation is not as large of a problem on Unix systems, due to the way that the OS writes files to disk.

Apparently the author thinks that just because Windows includes software to defragment the file system, it must be much more susceptible to fragmentation.  He’d be right if we were talking about Windows 98, or if people chose not to run NTFS… but he and the article he references are dead wrong.

NTFS has almost identical abilities to the EXT2, EXT3, and EXT4 file systems when it comes to avoiding fragmentation; the difference is that NTFS supports defragmentation of the file system (and Windows ships with a rudimentary defragmenter).  In fact, if *nix file systems were so impervious to fragmentation, why would the ability to defragment be one of the major feature additions in EXT4 (though not fully implemented yet)?

There are many things about *nix type operating systems that can clearly be pointed to as superior to Windows; resistance to fragmentation simply isn’t one of them.  WAKE UP and live in the current millennium; we don’t need to confuse FAT16/FAT32 with Windows.

Virtual machines need regular defragging, researcher says
By Joab Jackson on ComputerWorld

Originally posted 2010-10-12 02:00:44.

Linux File System Fragmentation

I’ve always found it hilarious that *nix bigots (particularly Linux bigots) asserted that their file systems, unlike those found in Windows, didn’t fragment.

HA HA

Obviously most anyone who would make that assertion really doesn’t know anything about file systems or Windows.

It’s true that back in the ancient times of Windows, when all you had was FAT or FAT32, fragmentation was a real problem; but as of the introduction of HPFS in OS/2, and then NTFS in Windows NT, fragmentation on a Windows system was on par with fragmentation on a *nix system.

Though you’ll recall that in Windows, even with NTFS, defragmentation was possible, and tools to accomplish it were readily available (one is included with the operating system).

Ext2, Ext3, Ext4 (and most any other file system known to man) might, like NTFS, attempt to prevent file system fragmentation; but it happens, and over time it can negatively impact performance.

Interestingly enough, with Ext4 there appear to be fewer *nix people in that great river in Egypt: d Nile… or denial, as it were.

Ext4 is a very advanced file system, and most every trick in the book to boost performance and prevent fragmentation is included, along with the potential for defragmentation.  The tool e4defrag will allow defragmentation of single files or entire file systems, though it’s not quite ready yet; there are still a few kernel issues to be worked out before it can defragment a live file system.

With Ext4, as with NTFS, one way you can defragment a file is to copy it; the file system itself will attempt to locate an area of the disk that can hold the file in contiguous allocation units.  But, of course, the file system’s performance can often be increased by coalescing the free space, or at least coalescing free space fragments that are likely too small to hold a file.
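The copy trick is simple enough to script. A sketch, with a helper name of my own choosing (on Linux you could verify the before/after extent counts with `filefrag`; whether the copy actually lands contiguously is, of course, up to the file system):

```python
import os
import shutil

def defrag_by_copy(path: str) -> None:
    """Rewrite a file in one sequential pass so the file system gets a
    chance to pick a fresh, hopefully contiguous, run of allocation units.
    """
    tmp = path + ".defrag.tmp"     # temp name used only for this sketch
    shutil.copyfile(path, tmp)     # sequential rewrite of the data
    shutil.copystat(path, tmp)     # preserve timestamps and permissions
    os.replace(tmp, path)          # atomically swap the copy into place

# usage: rewrite a scratch file and confirm its contents survived intact
with open("demo.bin", "wb") as f:
    f.write(os.urandom(8192))
defrag_by_copy("demo.bin")
assert os.path.getsize("demo.bin") == 8192
os.unlink("demo.bin")
```

Note the atomic `os.replace` at the end: if the copy fails part-way, the original file is untouched, which is exactly the property you want from any tool that rewrites data in place.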

As I said when I started, I’ve always found it hilarious that *nix bigots often don’t have a very good understanding of the technical limitations and strengths of the various pieces of an operating system… but let me underscore: just because people don’t always know what they’re talking about doesn’t necessarily mean the solution they’re evangelizing isn’t something worth considering.

Originally posted 2010-06-03 02:00:06.