Entries Tagged as 'Linux'

Windows Live Essentials 2011 – Live Mail

Or perhaps better titled: Why I continue to use a product I hate.

When Outlook Express debuted many years ago, Microsoft showed it was possible to create an email reader for Windows that was clean, simple, and powerful… and for all its problems, Outlook Express worked.

When Microsoft shipped Windows Vista, they abandoned Outlook Express in favor of Windows Mail; largely it appeared to be the same program with a few changes to make it more Vista-like.

But not long after Windows Mail hit the street, Microsoft decided to launch Windows Live Mail, what appears to be a totally new program modeled after Outlook Express / Windows Mail.  I say it was new because many of the bugs present in the beta of Windows Live Mail were bugs that had been fixed in the Outlook Express code line years before (as an interesting note, several of the bugs I personally reported during the Windows Live Mail beta are still present in the newest version – 2011).

The previous version of Live Mail was tolerable; most of its annoyances had fairly simple workarounds — and in time, maybe we’ll all figure out ways to work around the headaches in 2011; but I just don’t feel like putting so much effort into a POS software package time and time again…

And for those of you who say it’s “FREE” so you get what you get, I’d say no — it’s not exactly free… Microsoft understands that software like this is necessary in order to have any control over users’ internet habits, so it isn’t free — you’re paying a “price” for it.

Plus, there are other alternatives… Thunderbird for one.

Why don’t I use Thunderbird?  Simple: there is one “feature” lacking in Thunderbird that prevents me from embracing it.  You cannot export account information and restore it.  Sure, MozBackup will let you back up a complete profile and transfer it to another machine — but I want access to individual email accounts.

Why?  Well, here’s the scenario that I always hit.

I travel, and I tend to take my netbook with me when I travel — and often I’m using my cell phone to access the internet… while it’s “fast” by some standards, if you were to re-sync fifty email accounts, each with a dozen IMAP folders, it would take all day.  Further, most of those email accounts are uninteresting on a day-to-day basis, particularly when I travel — I only want to access a couple of those accounts for sure, but I might want to load an account on demand (you never know).  What I do with Live Mail is keep the IAF files for all my email accounts stored on disk (I sync them from my server), set up the mail program by loading the three or four accounts I use routinely, load the others only as I need them, and remove them from Live Mail when done.

OK — so that doesn’t fit you… here’s another.

You’ve got several computers, and you’d like to set up your email quickly and painlessly on all of them… but you don’t need all your email accounts on every one of them — plus you add and remove accounts over time.  Again, Live Mail and its import/export handles this nicely.  You simply export a set of IAF files, and then import the ones you want on each machine.

The question is why doesn’t Thunderbird have this ability?

Well, there was a plug-in for an older version of Thunderbird that did something like this; of course it didn’t work that well for the version it was written for, and it doesn’t work at all for newer versions.

One more thing that I consider an annoyance (but it’s probably slightly more than that): there is no easy way in Thunderbird to change the order of accounts in the account window — and they’re not ordered alphabetically (that would make too much sense), they’re ordered chronologically (based on when you created them).  So you can re-order them, if you delete the accounts and add them back in the order you’d like them to appear; but wait, there’s no way to add an account in Thunderbird without typing in all the information again.

And if you’re thinking, OK, so write a plug-in that manages account ordering and import/export.  Sure, that would be the “right” thing to do if Thunderbird really had an interface to get to that information easily — but no, it appears you’d have to parse a JavaScript settings file… oh joy.

These should be core features of Thunderbird; and in my mind they are huge barriers to wide acceptance.

Originally posted 2010-11-12 02:00:32.

Ubuntu – Creating A Disk Mirror

A disk mirror, or RAID1, is a fault-tolerant disk configuration in which every block of one drive is mirrored on a second drive; this provides the ability to lose one drive (or have damaged sectors on one drive) and still retain data integrity.

RAID1 will have lower write performance than a single drive, but will likely have slightly better read performance than a single drive.  Other types of RAID configurations will have different characteristics; but RAID1 is simple to configure and maintain (and conceptually it’s easy for most anyone to understand the mechanics), and it’s the topic of this article.

Remember, all these commands will need to be executed with elevated privileges (as super-user), so they’ll have to be prefixed with ‘sudo’.

First step: select two disks — preferably identical (but as close to the same size as possible) — that don’t have any data on them (or at least don’t have any important data on them).  You can use Disk Utility (GUI), gparted (GUI), cfdisk (CLI), or fdisk (CLI) to confirm that the disks have no data and to change (or create) the partition type to “Linux raid autodetect” (type “fd”) — also note the devices that correspond to the drives; they will be needed when building the array.
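
If you use fdisk, the interactive sequence is short (assuming /dev/sde is one of your drives; substitute your own device names):

  • fdisk /dev/sde

Then press ‘n’ to create a new partition, ‘t’ to set the partition type to ‘fd’, and ‘w’ to write the changes and exit.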

Check to make sure that mdadm is installed; if not you can use the GUI package manager to download and install it; or simply type:

  • apt-get install mdadm

For this example, we’re going to say the drives were /dev/sde and /dev/sdf.

Create the mirror by executing:

  • mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sde1 missing
  • mdadm --manage /dev/md0 --add /dev/sdf1

Now you have a mirrored drive, /dev/md0.

At this point you could setup a LVM volume, but we’re going to keep it simple (and for most users, there’s no real advantage to using LVM).

Now you can use Disk Utility to create a partition (I’d recommend a GPT style partition) and format a file system (I’d recommend ext4).
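
If you’d rather stay on the command line, you can skip the partition table and put the file system directly on the array; a minimal example (using the array device from above):

  • mkfs.ext4 /dev/md0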

You will want to decide on a mount point and create the directory for it (this example uses /mirror).

You will probably have to add an entry to /etc/fstab and /etc/mdadm/mdadm.conf if you want the volume mounted automatically at boot (I’d recommend using the UUID rather than the device names).

Here’s an example mdadm.conf entry

  • ARRAY /dev/md0 level=raid1 num-devices=2 UUID=d84d477f:c3bcc681:679ecf21:59e6241a

And here’s an example fstab entry

  • UUID=00586af4-c0e8-479a-9398-3c2fdd2628c4 /mirror ext4 defaults 0 2

You can use mdadm to get the UUID of the mirror (RAID) container

  • mdadm --examine --scan

And you can use blkid to get the UUID of the file system

  • blkid

You should probably make sure that you have SMART monitoring installed on your system so that you can monitor the status (and predictive failure) of drives.  To get information on the mirror you can use the Disk Utility (GUI) or just type

  • cat /proc/mdstat
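
If you don’t have SMART tools installed, the smartmontools package provides a simple command-line health check (the drive device below is just an example):

  • apt-get install smartmontools
  • smartctl -H /dev/sde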

There are many resources on setting mirrors on Linux; for starters you can simply look at the man pages on the mdadm command.

NOTE: This procedure was developed and tested using Ubuntu 10.04 LTS x64 Desktop.

Originally posted 2010-06-28 02:00:37.

Dreamlinux – because dreams can come true

I’ll have to echo what I said in my previous posts about not looking for a Mac clone, but rather an environment that was usable by ordinary people.

Dreamlinux has potential.

There are a number of visual elements about the interface that I don’t like and don’t think are additive; but the bottom line is Dreamlinux works, it’s very stable, and it has virtually every component installed ready to use right out of the “box”.

Dreamlinux has a long way to go before I would give it a resounding vote of confidence — it’s still very much Linux, and Linux and all its geek appeal oozes out at every seam…

Geeks just don’t design software or systems to be usable — they haven’t learned that just because you can, doesn’t mean you should!

But like I said, Dreamlinux has potential, and it certainly warrants a thorough examination and review.

Dreamlinux

Originally posted 2010-01-07 01:00:51.

Linux BitTorrent Clients

I’ve been looking at bit torrent (BitTorrent) clients for Linux over the past few weeks — and to say there’s a huge number of candidates wouldn’t do justice to the number of choices a person has… but like so many things in life, quantity and quality are generally on perpendicular axes.

I set a fairly simple set of requirements for the client:

  • Open source
  • Stability
  • Simplicity
  • Configurability
  • Support protocol encryption (require it)
  • Light on resources
  • Ability to handle torrents via URLs

And I set some nice to haves:

  • Search integration
  • Daemon
  • IP black listing (though I use IPBlock, so this is only a nice to have for others)

So once again I set out to limit the field and do some real testing on Ubuntu 10.04 LTS… and the ones I ended up really doing more than just kicking the tires on are listed below (alphabetically).  Others failed because they didn’t meet my requirements, they were pieces of crap that should be expunged from the world (LOL), or I just didn’t like them enough to waste time and energy on them.  The links for each of the below are to Wikipedia; you can find links there to the website for each client.  I installed all of the clients via the package manager on Ubuntu.

Deluge · Deluge is a fairly basic program, though it has just about every setting configurable that you might want.  It does have a client / server model (use of it is optional); a single instance of the daemon is unable to handle multiple users, but it does allow you to terminate your session and continue downloading, and it doesn’t seem to have any issue running multiple daemons (one for each user).  This client also offers a number of “plug-ins” to provide a block list, a web ui, a scheduler, etc — features most others just include as part of the base system.  I wanted to like this client more than I did; but in the end I can only call it acceptable.

KTorrent · KTorrent is a nicely done program, and it has just about every setting configurable that you might want.  Interestingly, by default the queue manager is disabled, so it really doesn’t act much like any other bit torrent client I’ve ever used — but enabling it gives you the ability to download multiple torrents at once.  One shortcoming is that you don’t seem to be able to limit the total number of downloads and uploads together — you can limit them individually, but that means for trackers that limit your total active connections you could end up not using all of them.  I’ve also noted that this client seems to be a little “fat” and consumes a significant amount of system resources (the GUI in particular) when left running for extended periods.  I like this client; but there are better.

qBittorrent · qBittorrent is essentially a *nix clone of the Windows version of uTorrent (µTorrent); and it certainly does a good job mimicking it.  It seems to have all the features I wanted, and none of the downsides.  It has a web ui, an IP filter, etc.  It seems to be reasonably light on system resources and it just works.  If I had to pick a standalone bit torrent client, this would probably be my recommendation.

TorrentFlux · TorrentFlux is actually a web ui for BitTornado.  There is a fork of the project called TorrentFlux-b4rt that looks like it will eventually offer more features (and support more bit torrent engines) but for the moment TorrentFlux appears to be much more stable.  It’s fairly basic, but has most all the features one might want.  While many of the others offer a web ui, I think this is probably one of the better “server” solutions for bit torrent clients.

Transmission · Transmission is a very simple bit torrent client; perhaps too simple.  It has all the settings you need, as well as a web ui.  It also has ports for just about every operating system (so if you only wanted to deal with one bit torrent client across multiple operating systems this would be a good choice).  Transmission has a huge following; but personally I felt it just wasn’t quite what I wanted.

In the end, I guess I didn’t find a bit torrent client that I really liked… but I think TorrentFlux (or a re-incarnation of it) has good potential to be what I want; and I think qBittorrent is probably my favorite of the standalone clients.  However, in saying that, let me underscore that every client on this list works, and works acceptably well — so I don’t think you’ll go wrong with any of them… and I’m sure that those with a religious conviction to one or the other will just not accept that their favorite client doesn’t top my list… but in fact, I’m holding the top slots of my list open hoping I find something better.

NOTE: The use of torrents for downloading does not necessarily denote that a user is breaking any laws.  That said, because many internet service providers will terminate a user that is using a torrent client, it is a good idea to require encrypted connections and to use IP filtering software (with current black lists).

Originally posted 2010-08-16 02:00:55.

OpenGEU – Luna Serena

Let me start by saying I like OpenGEU quite a bit; it’s a very nicely done distribution, it seems to be solid, and it seems to have most of what an individual would want loaded by default.

However…

It’s not really very Mac-ish.

So before I continue talking about my findings on OpenGEU I want to redefine the parameters…

In my mind it’s not necessary for an operating system to mimic Windows or OS-X in order to have reasonably good usability; in fact, we can see from the steady evolution of the operating system, and from the money and resources that Microsoft and Apple throw at the problem, that they don’t have it right — they just feel they’re on the right path.

So… I’m not looking for a Mac clone (if I were I would have put Hackintosh on the original list); I’m looking for an operating system default installation that achieves a highly usable system that non-computer users will be comfortable using and highly productive on from the start.

Now I feel like I should find an attorney to write me a lengthy disclaimer…

OpenGEU may well be a very good candidate for non-computer users who wish to find alternatives to Microsoft and Apple (either because they simply don’t have the money to stay on the upgrade roller-coaster or because they feel they do not want their productivity and destiny tied so closely to a commercial software venture).

OpenGEU installs easily, it creates a simple, easy to use, easy to understand desktop environment.  Most every tool you might want or need is there; and of course the package manager can help you get updates and new software fairly easily.

While I cannot tell you that all the multimedia software that I would like to see is present by default, there’s enough to get the average user started.

The overwhelming characteristic of OpenGEU that I feel I must underscore is how clean the appearance is — a testament to the fact that a designer may in fact be much better qualified to create human-usable software than an engineer is.

OpenGEU makes the cut; and deserves a thorough evaluation.

I’ll publish a much more extensive article on OpenGEU when I’ve finished going through the candidates and had more time to use it… but I’m excited at the possibilities!

OpenGEU

Originally posted 2010-01-06 01:00:41.

Ubuntu – Disk Utility

When you install Ubuntu 10.04 Desktop, the default menu item for Disk Utility isn’t extremely useful; after all, it’s on the System->Administration menu, so you would assume that it’s meant to administer the machine, not just view the disk configuration.

What I’m alluding to is that by default Disk Utility (/usr/bin/palimpsest) is not run with elevated privileges (as super-user), but rather as the current user — which, if you’re doing things as you should be, means you won’t be able to effect any changes, and Disk Utility will probably end up being a waste of time and effort.

To correct this problem all you need do is modify the menu item which launches Disk Utility to elevate your privileges before launching (using gksu) — that, of course, assumes that you’re permitted to elevate your privileges.

To add privilege elevation to Disk Utility:

  1. Right click your mouse on the menu bar along the top (right on System is good) and select ‘edit menu items’
  2. Navigate down to ‘Administration’ and select it in the left pane
  3. Select ‘disk utility’ in the right pane
  4. Select ‘properties’ in the buttons on the right
  5. Under ‘command’, prefix it with ‘gksu’ or substitute ‘gksu /usr/bin/palimpsest’ (putting the entire path there)
  6. Then click ‘close’ and ‘close’ again…
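
You can verify the elevated launch works by running the same command from a terminal first:

  • gksu /usr/bin/palimpsest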

Originally posted 2010-06-27 02:00:33.

Linux Server

I’ve been experimenting with a Linux server solution for the past couple months — I was prompted to look at this when my system disk failed in a Windows Server 2008 machine.

First, I’m amazed that after all these years Microsoft doesn’t have a standard module for monitoring the health of a system — at least the SMART status from disk drives.

I do have an Acronis image of the server from when I first installed it, but it would be a pain to reconfigure everything on that image to be as it was — and I guess I just haven’t been that happy with Windows Server 2008.

I personally find Windows Server 2008 needlessly complicated.

I’m not even going to start ranting on Hyper-V (I’ve done that enough, comparing it head-to-head with other technology… all I will say is it’s a good thing their big competitor is VMware, or else Microsoft would really have to worry about having such a pathetic virtualization offering).

With a Linux distribution it’s a very simple thing to install a basic server. I actually tried Ubuntu, CentOS, and Fedora. I also looked at the Xen distribution, but that wasn’t really of interest for a general purpose server.

Personally I found CentOS (think Red Hat) to be a little too conservative with its releases/features; I found Fedora to be a little too bleeding edge with its releases/features (plus there’s no long term support commitment); so I was really just left with Ubuntu.

I didn’t really see any reason to look exhaustively at every Debian based distribution — Ubuntu was, in my mind, the best choice of that family; and I didn’t want to look at any distribution that wasn’t available at no cost, nor any distribution that didn’t have a good, stable track record.

With Ubuntu 10.04 LTS (10.04 is a Long Term Support release – which makes it a very good choice to build a server on) you can choose the Desktop or the Server edition — the main difference between the Server and the Desktop editions is that the Server does not install the X server and graphical desktop components (you can add them).

The machine I was installing on had plenty of memory and processor to support a GUI, and I saw no reason not to install the Desktop version (I did try out the Server version on a couple installs — and perhaps if you have an older machine, a machine with very limited memory, a machine that will be taxed to its limits, or a machine where you want the absolute smallest attack surface, you’d want the Server edition — though almost all those requirements would probably make me shift to CentOS rather than Ubuntu).

My requirements were fairly simple — I wanted to replace the failed Windows 2008 Server with a machine that could provide DNS, DHCP, a web server, file storage (home directories — served via CIFS/Samba), and active P2P downloads.

Additionally, the server would have to have fault-tolerant file systems (as did the Windows server).
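
For reference, those services map to standard Ubuntu packages, so on 10.04 the bulk of the install is something like (package names as of that release; newer releases rename some of them):

  • apt-get install bind9 dhcp3-server apache2 samba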

Originally my testing focused on just making sure all the basic components worked, and worked reasonably well.

Then I moved on to getting all the tools I had written working (I converted all the C# code to PHP).

My final phase involved evaluating fault-tolerant options. Initially I’d just used the LSI 150-4 RAID controller I had in the Windows Server 2008 machine (Linux supported it with no real issues — except that Linux was not able to monitor the health of the drives or the array).

I didn’t really see much need to use RAID5 as I had done with Windows Server 2008; so I concentrated on just doing RAID1 (mirroring) — I tried basic mirrors just using md, as well as using lvm (over md).

My feeling was that lvm added an unnecessary level of complexity on a standalone server (that isn’t to say that lvm doesn’t have features that some individuals might want or need). So my tests focused primarily on just simple mirrors using md.

I tested performance of my LSI 150-4 RAID5 SATA1 PCI controller (with four SATA2 drives) against RAID1 SATA2 using Intel ICH9 and SiI3132 controllers (with pairs of SATA1 or SATA2 drives). I’d expected that the LSI 150-4 would outperform the md mirror with SATA1 drives on both read and write, but that with SATA2 drives I’d see better reads on the md mirror.

I was wrong.

The md mirrors actually performed better across the board (though negligibly better with SATA1 drives attached) — and the amazing thing was that CPU utilization was extremely low.
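
For anyone wanting a rough comparison of sequential throughput, simple tests like these give ballpark numbers (the device and path below are examples, not the exact methodology behind the results above):

  • hdparm -t /dev/md0
  • dd if=/dev/zero of=/mirror/testfile bs=1M count=1024 oflag=direct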

Now, let me underscore here that the LSI 150-4 controller is a PCI-X (64-bit) controller that I’m running as PCI (32-bit); and the LSI 150-4 represents technology that’s about six years old… and the LSI 150-4 controller is limited to SATA1 with no command set enhancements.

So this comparison wouldn’t hold true if I were testing md mirrors against a modern hardware RAID controller — plus the other RAID controllers I have are SAS/SATA2 PCIe and have eight and sixteen channels (more spindles means more performance).

Also, I haven’t tested md RAID5 performance at all.

My findings at present are that you can build a fairly high performance Linux based server for a small investment. You don’t need really high end hardware, you don’t need to invest in hardware RAID controllers, and you don’t need to buy software licenses — you can effectively run a small business or home office environment with confidence.

Originally posted 2010-06-24 02:00:09.

Ubuntu – RAID Creation

I think learning how to use mdadm (/sbin/mdadm) is a good idea, but in Ubuntu Desktop you can use Disk Utility (/usr/bin/palimpsest) to create most any of your RAID (“multiple disk”) configurations.

In Disk Utility, just access “File->Create->Raid Array…” on the menu and choose the options.  Before doing that, you might want to clear off the drives you’re going to use (I generally create a fresh GPT partition table to ensure each drive is ready to be used as a component of the RAID array).

Once you’ve created the container with Disk Utility, you can even format it with a file system; however, you will still need to manually add the entries to /etc/mdadm/mdadm.conf and /etc/fstab.

One other minor issue I noticed.

I gave my multiple disk containers names (mirror00, mirror01, …) and Disk Utility will show them mounted on device /dev/md/mirror00 — in point of fact, you want to use device names like /dev/md0, /dev/md1, … in the /etc/mdadm/mdadm.conf file.  Also, once again, I highly recommend that you use the UUID for the array configuration (in mdadm.conf) and for the file system (in fstab).
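
One easy way to capture the ARRAY line with its UUID is to let mdadm generate it and append it to the configuration file (review the output before trusting it); blkid supplies the file system UUID for fstab:

  • mdadm --examine --scan | tee -a /etc/mdadm/mdadm.conf
  • blkid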

Originally posted 2010-07-12 02:00:33.

Linux File System Fragmentation

I’ve always found it hilarious that *nix bigots (particularly Linux bigots) asserted that their file systems, unlike those found in Windows, didn’t fragment.

HA HA

Obviously most anyone who would make that assertion really doesn’t know anything about file systems or Windows.

It’s true that back in the ancient times of Windows, when all you had was FAT or FAT32, fragmentation was a real problem; but as of the introduction of HPFS in OS/2 and then NTFS in Windows NT, fragmentation on a Windows system has been on par with fragmentation on a *nix system.

Though you’ll recall that in Windows, even with NTFS, defragmentation was possible and tools to accomplish it were readily available (one is included with the operating system).

Ext2, Ext3, Ext4 — and most any other file system known to man — might (like NTFS) attempt to prevent file system fragmentation, but it happens — and over time it can negatively impact performance.

Interestingly enough, with Ext4 there appear to be fewer *nix people in that great river in Egypt — de Nile… or denial as it were.

Ext4 is a very advanced file system, and most every trick in the book to boost performance and prevent fragmentation is included — along with the potential for defragmentation.  The tool e4defrag will allow for the defragmentation of single files or entire file systems — though it’s not quite ready… there are still a few more kernel issues to be worked out before it can defragment a live file system.
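
Once your kernel and e2fsprogs are new enough, usage should look something like this (paths are examples; the -c switch only reports fragmentation without changing anything):

  • e4defrag -c /home
  • e4defrag /home/user/somefile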

With Ext4, as with NTFS, one way you can defragment a file is to copy it — the file system itself will attempt to locate an area of the disk that can hold the file in contiguous allocation units — but, of course, the file system’s performance can often be improved by coalescing the free space, or at least coalescing free space fragments that are likely too small to hold a file.
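
The copy trick is as simple as it sounds; write a fresh copy (which the allocator will try to lay out contiguously) and then move it over the original:

  • cp -p somefile somefile.tmp && mv somefile.tmp somefile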

As I said when I started; I’ve always found it hilarious that *nix bigots often don’t have a very good understanding of the technical limitations and strengths of various pieces of an operating system… but let me underscore just because people don’t always know what they’re talking about doesn’t necessarily mean that the solution they’re evangelizing might not be something that should be considered.

Originally posted 2010-06-03 02:00:06.

Ubuntu – Desktop Search

Microsoft has really shown the power of desktop search in Vista and Windows 7; their newest Desktop Search Engine works, and works well… so in my quest to migrate over to Linux I wanted to have the ability to have both a server style as well as a desktop style search.

So the quest begun… and it was as short a quest as marching on the top of a butte.

I started by reviewing what I could find on the major contenders (just do an Internet search, and you’ll only find about half a dozen reasonable articles comparing the various desktop search solutions for Linux)… which were few enough that it didn’t take very long (alphabetical):

  • Beagle
  • Google Desktop Search
  • Recoll
  • Strigi
  • Tracker

My metrics to evaluate a desktop search solution would focus on the following points:

  • ease of installation, configuration, maintenance
  • search speed
  • search accuracy
  • ease of access to search (applet, web, participation in Windows search)
  • resource utilization (cpu and memory on indexing and searching)

I immediately passed on Google Desktop Search; I have no desire for Google to have more access to information about me; and I’ve tried it before in virtual machines and didn’t think very much of it.

Beagle

I first tried Beagle; it sounded like the most promising of all the search engines, and Novell was one of the developers behind it, so I figured it would be a stable baseline.

It was easy to install and configure (the package manager did most of the work); and I could use the search application or the web search, though the web interface had to be enabled using beagle-config:

beagle-config Networking WebInterface true

And then I could just go to port 4000 (either locally or remotely).

I immediately did a test search; nothing came back.  Wow, how disappointing — several hundred documents in my home folder should have matched.  I waited and tried again — still nothing.

While I liked what I saw, a search engine that couldn’t return reasonable results to a simple query (at all) was just not going to work for me… and since Beagle isn’t actively developed any longer, I’m not going to hold out for them to fix a “minor” issue like this.

Tracker

My next choice to experiment with was Tracker; you couldn’t ask for an easier desktop search to experiment with on Ubuntu — it seems to be the “default”.

One thing that’s important to mention — you’ll have to enable the indexer (per user); it’s disabled by default.  Just use the configuration tool (you might need to install an additional package):

tracker-preferences

Same test, but instantly I got about a dozen documents returned, and additional documents started to appear every few seconds.  I could live with this; after all I figured it would take a little while to totally index my home directory (I had rsync’d a copy of all my documents, emails, pictures, etc from my Windows 2008 server to test with, so there was a great deal of information for the indexer to handle).

The big problem with Tracker was there was no web interface that I could find (yes, I’m sure I could write my own web interface; but then again, I could just write my own search engine).

Strigi

On to Strigi — straightforward to install, and easy to use… but it didn’t seem to give me the results I’d gotten quickly with Tracker (though better than Beagle), and it seemed to be limited to only ten results (WTF?).

I honestly didn’t even look for a web interface for Strigi — it was way too much of a disappointment (in fact, I think I’d rather have put more time into Beagle to figure out why I wasn’t getting search results than work with Strigi).

Recoll

My last test was with Recoll; while it looked promising from all that I read, everyone seemed to indicate it was difficult to install and that you needed to build it from source.

Well, there’s an Ubuntu package for Recoll — so it’s just as easy to install; it just turned out to be a waste of effort to install.

I launched the Recoll application and typed in a query — no results came back, but numerous errors were printed in my terminal window.  I checked the preferences and made a couple minor changes — ran the search query again — got a segmentation fault, and called it a done deal.

It looked to me from the size of the database files that Recoll had indexed quite a bit of my folder; why it wouldn’t give me any search results (and seg faulted) was beyond me — but it certainly was something I’d seen before with Linux based desktop search.

Conclusions

My biggest conclusion was that Desktop Search on Linux just isn’t really something that’s ready for prime time.  It’s a joke — a horrible joke.

Of the search engines I tried, only Tracker worked reasonably well, and it has no web interface, nor does it participate in a Windows search query (an SMB2 feature which directs the server to perform the search when querying against a remote file share).

I’ve been vocal in the past that Linux fails as a Desktop because of the lack of a cohesive experience; but it appears that Desktop Search (or search in general) is a failing of Linux as both a Desktop and a Server — and clearly a reason why choosing Windows Server 2008 is the only reasonable choice for businesses.

The only upside to this evaluation was that it took less time to do than to read about or write up!

Originally posted 2010-07-06 02:00:58.