Entries Tagged as 'Linux'

Ubuntu – Disk Utility

When you install Ubuntu 10.04 Desktop, the default menu item for Disk Utility isn’t extremely useful; after all, it’s on the System->Administration menu, so you would assume that it’s meant to administer the machine, not just view the disk configuration.

What I’m alluding to is that by default Disk Utility (/usr/bin/palimpsest) is not run with elevated privileges (as super-user), but rather as the current user — which, if you’re operating as you should be, means you won’t be able to effect any changes, and Disk Utility will probably end up being a waste of time and effort.

To correct this problem all you need do is modify the menu item which launches Disk Utility to elevate your privileges before launching (using gksu) — that, of course, assumes that you’re permitted to elevate your privileges.

To add privilege elevation to Disk Utility:

  1. Right-click on the menu bar along the top (right on ‘System’ is good) and select ‘Edit Menus’
  2. Navigate down to ‘Administration’ and select it in the left pane
  3. Select ‘Disk Utility’ in the right pane
  4. Click the ‘Properties’ button on the right
  5. Under ‘Command’, prefix the existing command with ‘gksu’ (or substitute ‘gksu /usr/bin/palimpsest’, giving the entire path)
  6. Click ‘Close’, then ‘Close’ again…
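If you’d rather make this change from a terminal than through the menu editor, the same edit can be applied to a local copy of the launcher file. A minimal sketch (the .desktop file name here is an assumption; check /usr/share/applications for the actual one on your release):

    # Copy the launcher locally so the system-wide file stays untouched
    mkdir -p ~/.local/share/applications
    cp /usr/share/applications/palimpsest.desktop ~/.local/share/applications/
    # Rewrite the Exec line so Disk Utility launches via gksu
    sed -i 's|^Exec=.*|Exec=gksu /usr/bin/palimpsest|' \
        ~/.local/share/applications/palimpsest.desktop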

Originally posted 2010-06-27 02:00:33.

Linux Server

I’ve been experimenting with a Linux server solution for the past couple months — I was prompted to look at this when my system disk failed in a Windows Server 2008 machine.

First, I’m amazed that after all these years Microsoft doesn’t have a standard module for monitoring the health of a system — at a minimum, the SMART data from the disk drives.

I do have an Acronis image of the server from when I first installed it, but it would be a pain to reconfigure everything on that image to be as it was — and I guess I just haven’t been that happy with Windows Server 2008.

I personally find Windows Server 2008 needlessly complicated.

I’m not even going to start ranting on Hyper-V (I’ve done that enough, comparing it head-to-head with other technology… all I will say is it’s a good thing their big competitor is VMware, or else Microsoft would really have to worry about having such a pathetic virtualization offering).

With a Linux distribution it’s a very simple thing to install a basic server. I actually tried Ubuntu, CentOS, and Fedora. I also looked at the Xen distribution, but that wasn’t really of interest for a general purpose server.

Personally I found CentOS (think Red Hat) to be a little too conservative on its releases/features; I found Fedora to be a little too bleeding edge on its releases/features (plus there’s no long term support commitment); so I was really just left with Ubuntu.

I didn’t really see any reason to look exhaustively at every Debian based distribution — Ubuntu was, in my mind, the best choice of that family; and I didn’t want to look at any distribution that wasn’t available at no cost, nor any distribution that didn’t have a good, stable track record.

With Ubuntu 10.04 LTS (10.04 is a Long Term Support release — which makes it a very good choice to build a server on) you can choose the Desktop or the Server edition — the main difference between the Server and the Desktop is that the Server does not install the X server and graphical desktop components (you can add them later).

The machine I was installing on had plenty of memory and processor to support a GUI, and I saw no reason not to install the Desktop version (I did try out the Server version on a couple of installs — and perhaps if you have an older machine, or a machine with very limited memory, or a machine that will be taxed to its limits, or a machine where you want the absolute smallest attack surface, you’d want the Server edition — though almost all of those requirements would probably make me shift to CentOS rather than Ubuntu).

My requirements were fairly simple — I wanted to replace the failed Windows 2008 Server with a machine that could perform my DNS, DHCP, web server, file store (home directories — served via CIFS/Samba), and active P2P downloads.

Additionally, the server would have to have fault-tolerant file systems (as did the Windows server).
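For what it’s worth, every one of those roles is covered by a stock package. A sketch of what the installs might look like on Ubuntu 10.04 (bind9 for DNS and dhcp3-server for DHCP were the package names current at the time; verify with apt-cache search):

    # DNS, DHCP, web server, and CIFS/Samba file sharing in one shot
    sudo apt-get install bind9 dhcp3-server apache2 samba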

Originally my testing focused on just making sure all the basic components worked, and worked reasonably well.

Then I moved on to getting all the tools I had written working (I converted all the C# code to PHP).

My final phase involved evaluating fault-tolerant options. Initially I’d just used the LSI 150-4 RAID controller I had in the Windows Server 2008 machine (Linux supported it with no real issues — except that Linux was not able to monitor the health of the drives or the array).

I didn’t really see much need to use RAID5 as I had done with Windows Server 2008; so I concentrated on just doing RAID1 (mirroring) — I tried basic mirrors just using md, as well as using lvm (over md).

My feeling was that lvm added an unnecessary level of complexity on a standalone server (that isn’t to say that lvm doesn’t have features that some individuals might want or need).  So my tests focused primarily on simple mirrors using md.
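For reference, creating a simple md mirror like the ones I tested takes a single command. A sketch, assuming two spare drives at /dev/sdb and /dev/sdc (substitute your own devices, and expect everything on them to be destroyed):

    # Build a RAID1 mirror from two drives
    sudo mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb /dev/sdc
    # Watch the initial synchronization progress
    cat /proc/mdstat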

I tested performance of my LSI 150-4 RAID5 SATA1 PCI controller (with four SATA2 drives) against RAID1 SATA2 using Intel ICH9 and SiI3132 controllers (with pairs of SATA1 or SATA2 drives). I’d expected that the LSI 150-4 would outperform the md mirror with SATA1 drives on both read and write, but that with SATA2 drives I’d see better reads on the md mirror.

I was wrong.

The md mirrors actually performed better across the board (though negligibly better with SATA1 drives attached) — and the amazing thing was that CPU utilization was extremely low.
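These weren’t laboratory-grade benchmarks; if you want to run a rough sequential comparison of your own, the stock tools are enough. A sketch (the device and mount point are examples):

    # Rough sequential read speed of the array
    sudo hdparm -t /dev/md0
    # Rough sequential write speed; conv=fdatasync forces a real flush to disk
    dd if=/dev/zero of=/mnt/md0/testfile bs=1M count=4096 conv=fdatasync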

Now, let me underscore here that the LSI 150-4 is a PCI-X (64-bit) controller that I’m running as PCI (32-bit); it represents technology that’s about six years old… and it’s limited to SATA1 with no command-set enhancements.

So this comparison wouldn’t hold true if I were testing md mirrors against a modern hardware RAID controller — plus the other RAID controllers I have are SAS/SATA2 PCIe and have eight and sixteen channels (more spindles means more performance).

Also, I haven’t tested md RAID5 performance at all.

My findings at present are that you can build a fairly high performance Linux based server for a small investment. You don’t need really high end hardware, you don’t need to invest in hardware RAID controllers, and you don’t need to buy software licenses — you can effectively run a small business or home office environment with confidence.

Originally posted 2010-06-24 02:00:09.

Ubuntu – RAID Creation

I think learning how to use mdadm (/sbin/mdadm) is a good idea, but in Ubuntu Desktop you can use Disk Utility (/usr/bin/palimpsest) to create most any of your RAID (“multiple disk”) configurations.

In Disk Utility, just access “File->Create->RAID Array…” on the menu and choose your options.  Before doing that, you might want to clear off the drives you’re going to use (I generally create a fresh GPT partition table to ensure the drive is ready to be used as a component of the RAID array).
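If you’d rather prep the drives from a terminal, that’s a one-liner per drive. A sketch, assuming the target drive is /dev/sdb (double-check the device name in Disk Utility first, since this destroys the existing partition table):

    # Write a fresh GPT label to the drive
    sudo parted /dev/sdb mklabel gpt
    # Verify the new (empty) partition table
    sudo parted /dev/sdb print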

Once you’ve created the container with Disk Utility, you can even format it with a file system; however, you will still need to manually add the entries to /etc/mdadm/mdadm.conf and /etc/fstab.

One other minor issue I noticed.

I gave my multiple disk containers names (mirror00, mirror01, …) and Disk Utility will show them mounted on device /dev/md/mirror00 — in point of fact, you want to use device names like /dev/md0, /dev/md1, … in the /etc/mdadm/mdadm.conf file.  Also, once again, I highly recommend that you use the UUID for the array configuration (in mdadm.conf) and for the file system (in fstab).
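A sketch of what those manual entries might look like; the device name, UUIDs, and mount point below are placeholders, so use the values that mdadm and blkid actually report:

    # Append the array definition (including its UUID) to /etc/mdadm/mdadm.conf
    sudo mdadm --detail --scan | sudo tee -a /etc/mdadm/mdadm.conf
    #   e.g.  ARRAY /dev/md0 metadata=1.2 name=mirror00 UUID=1234abcd:...

    # Look up the file system UUID for /etc/fstab
    sudo blkid /dev/md0
    # Then add an /etc/fstab line such as:
    #   UUID=abcdef01-2345-6789-abcd-ef0123456789  /srv/mirror00  ext4  defaults  0  2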

Originally posted 2010-07-12 02:00:33.

Linux File System Fragmentation

I’ve always found it hilarious that *nix bigots (particularly Linux bigots) asserted that their file systems, unlike those found in Windows, didn’t fragment.

HA HA

Obviously most anyone who would make that assertion really doesn’t know anything about file systems or Windows.

It’s true that back in the ancient times of Windows, when all you had was FAT or FAT32, fragmentation was a real problem; but as of the introduction of HPFS in OS/2 and then NTFS in Windows NT, fragmentation in a Windows system was on par with fragmentation in a *nix system.

Though you’ll recall that in Windows, even with NTFS, defragmentation was possible and tools to accomplish it were readily available (some included with the operating system).

Ext2, Ext3, Ext4 — and most any other file system known to man — might (like NTFS) attempt to prevent file system fragmentation, but it happens — and over time it can negatively impact performance.

Interestingly enough, with Ext4 there appear to be fewer *nix people in that great river in Egypt — de Nile… or denial as it were.

Ext4 is a very advanced file system; most every trick in the book to boost performance and prevent fragmentation is included — along with the potential for defragmentation.  The tool e4defrag will allow for the defragmentation of single files or entire file systems — though it’s not quite ready… there are still a few more kernel issues to be worked out before it can defragment a live file system.
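When the kernel issues are sorted out on your system, checking and defragmenting look like this. A sketch (the target path is an example; the -c flag only reports fragmentation, it changes nothing):

    # Report the fragmentation score of a file (or an entire mounted file system)
    sudo e4defrag -c /home/media/video.mkv
    # Attempt an online defragmentation of the same target
    sudo e4defrag /home/media/video.mkv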

With Ext4, as with NTFS, one way you can defragment a file is to copy it — the file system itself will attempt to locate an area of the disk that can hold the file in contiguous allocation units.  But, of course, the file system’s performance can often be improved by coalescing the free space, or at least coalescing the free-space fragments that are likely too small to hold a file.

As I said when I started, I’ve always found it hilarious that *nix bigots often don’t have a very good understanding of the technical limitations and strengths of the various pieces of an operating system… but let me underscore: just because people don’t always know what they’re talking about doesn’t mean that the solution they’re evangelizing isn’t worth considering.

Originally posted 2010-06-03 02:00:06.

Linux usability

While doing my preliminary look at usability in several Linux distributions that had adopted a Mac-ish paradigm I decided I needed to lay several ground rules to fully review them.

First, I decided that using a virtual machine was fine for getting initial impressions, but that just wasn’t going to be acceptable for a complete review… and I also decided that doing a review on only one piece of hardware wasn’t going to give me a very good idea of what problems a user might see related to the computer.

It’s certainly no problem for me to find a computer or two to install these Linux distributions on and run them through their paces; however, I don’t have any “low-end” hardware, so my tests are going to use fairly current generations of hardware — be aware that my impressions might not match yours if you’re planning on running these on hardware that is more than a couple of years old (and by a couple of years old I mean hardware whose components were current no more than two years ago).

I’ll perform the following:

  1. Install the distribution (without having to adjust any settings manually)
  2. Update itself (and applications)
  3. Start up, shut down, log on, log off
  4. Browse the web (that’s a given)
  5. Read email (including setting up the email program)
  6. Play a CD (music)
  7. Play several music files
  8. Play a DVD (movie)
  9. Play several video files
  10. Edit a WYSIWYG document
  11. Edit an image
  12. View and print a PDF
  13. Access a thumb drive
  14. Access files stored on a network device
  15. Access secure digital media (through a USB card reader)
  16. Scan an image
  17. Open a ZIP archive; create a ZIP archive
  18. Email an attachment, recover an email attachment
  19. Install a new (and useful) application
  20. Alter the appearance (preferably using a theme)

Beyond these simple tests I’ll try and appraise the simplicity, clarity, and ease of use of the interface… I’ll also comment on the overall appearance, the look and feel.

Originally posted 2010-01-08 01:00:19.

Linux BitTorrent Clients – Follow-Up

I’ve been using several Linux BitTorrent clients fairly heavily for the past week or so, and I have a few new comments about each of the “contenders” — below I’ve ordered them as I would recommend using them.

KTorrent · KTorrent might be a little “fat”, but it works, and it works very well — particularly when dealing with a large number of torrents simultaneously.  This is my pick.

TorrentFlux · TorrentFlux is probably the best solution you’ll find for a torrent server.  Simply said, it works fine (though I don’t know that I’ll continue to use it, simply because it doesn’t seem to be actively improved, and it’s far from perfection).

Transmission · Transmission is simple, and that simplicity seems to pay off — it works, it works well.

qBittorrent · qBittorrent works fairly well for a small number of simultaneous torrents; but if you want to download or seed large numbers of torrents, stay away from this one — it actually crashes, and unless your goal is just to watch the integrity of your torrents be checked over and over, you can do much better.

Deluge · Deluge was what I really wanted to like; and it seemed to work, but it has two major problems — it doesn’t handle large numbers of torrents well, and it doesn’t properly handle port forwarding (either through UPnP / NAT-PMP or when you try to set the port forwarding manually).  We’ll just leave it at this: it has issues (that apparently are fairly well known), and progress on it is glacial in its pace.

Moving torrents from one client to another isn’t all that hard to do, a little time consuming maybe… but once you figure out how to do it, and let your data files re-check, you’ll be on your way.

My experience over the past week reminds me that you can do your due diligence by researching every fact and figure about a program all you like; but until you put it through its paces you just won’t know.

NOTES: My test included about 550 torrents totaling just under half a terabyte.  I required that ports be forwarded through a firewall properly (either via UPnP, NAT-PMP, or by hand), that I be able to control the total number of active torrents (preferably with control over uploads and downloads as well), and that I be able to restrict the bandwidth (a scheduler was a nice touch, but not a requirement).

Originally posted 2010-08-25 02:00:30.

Video Encoding

A little over a year ago one of my friends with a Mac wanted to get into re-encoding video; I knew about the tools to do it on a PC, but none of the tools really had an OS-X port at that time, so I set out on a quest to find tools that could enable a person who didn’t know much about video encoding to accomplish it.

One of the first tools I stumbled on was HandBrake; it was an Open Source project, leveraging a number of other Open Source products, intended to create a cross-platform suite of tools for video encoding that was reasonably straightforward to use and produced reasonably good results.

Well, the version I tested was a near total failure… but the project showed promise and I kept tabs on it for quite some time.

Over the past year it’s steadily improved.  In fact, I’m probably being a little hard on it, since right after I played with an early version a much improved version was available that did work, and did allow my friend to accomplish what he wanted.

Last month HandBrake released a new version — a much improved version.

With Windows, OS-X, and Linux versions you can try out HandBrake for yourself and see the results.

I did two separate tests (for some reason I always use the same two DVD titles — Saving Private Ryan and Lord of the Rings — the reason is that both movies have a wide range of video types, from near-still images to sweeping panoramic views to everything in motion, blowing up)…

I had two separate machines (a Q9300 and a Q9400 both with 8GB of DDR2) doing the encodes, and did both normal and high profiles; one test was H.264 into a MPEG4 container with AAC created from the AC3 5.1 track; the other was H.264 into a MKV container with AAC created from the AC3 5.1 track in addition to AC3 5.1 pass-through and Dolby Surround pass-through with [soft] subtitles.
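If you’d rather script encodes than click through the GUI, HandBrake also ships a command-line front end. A minimal sketch (preset names as of the 0.9.x releases; run HandBrakeCLI --preset-list to see what your build offers):

    # Encode title 1 of a DVD with the built-in High Profile preset, into an MKV
    HandBrakeCLI -i /dev/dvd -t 1 -Z "High Profile" -f mkv -o movie.mkv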

For the high profiles: Lord of the Rings took a little over three hours; Saving Private Ryan took just under two and a half hours — so don’t get in a hurry; in fact, run it overnight and don’t bother the computer(s).

The high profile achieved about a 2:1 reduction in size; the normal profile achieved about a 4:1 reduction in size.  The high profile’s video was stunning, the normal profile’s video was acceptable.  The AAC audio was acceptable; the AC3 5.1 was identical to the source, and in perfect sync.

There are a number of advantages to keeping your video in an MPEG4 or MKV container versus a DVD image… it’s much easier to catalog and play, and of course it’s smaller (well, you could keep the MPEG2-TS in an MKV and it would be identically sized, but I see little reason for that).

The downside of RIPping your DVDs is that you lose the navigation stream and the extra material.  Do you care???

HandBrake will read source material in just about any format imaginable (and in almost any container as well)… you can take a look at its capabilities and features online.

I’ve got some VCR capture streams in DV video that I’m encoding now — trying a few of the more advanced settings in HandBrake to see how it works (well, that’s not really testing HandBrake, that’s testing the H.264 encoder).  My expectation is that once I get the settings right, it will do a fine job; but with video captures you should never expect the first try to be the best (well, I’m never that lucky).

While HandBrake is very easy to use, your ability to get really good results from it is going to partially depend on how willing you are to learn a little about video re-encoding (which will require a little reading and a little experimentation).   But that said, NO product is going to magically just do the right thing in every case…

Overall I would say that HandBrake is one of the best video encoders you’re going to find, and you cannot beat the price — FREE!

Here are some additional notes.

For Windows 7 you will want to download the DivX trial and just install the MKV splitter (nothing else is needed) so that Windows 7 can play media in an MKV container using its native CODECs.

With Windows Media Player 12 and Media Center I haven’t figured out how to switch audio streams; so make sure you encode the audio stream you want as the default as the first stream.  With Media Player Classic and Media Player Classic Home Cinema it’s easy to select the audio stream.  Also, Windows Media Player will not render AC3 pass-through streams; it will just pass them through the SPDIF/Toslink to your receiver — so you won’t get any sound if you’re trying to play them on the PC itself.

Don’t delete any of your source material until you are certain that you are happy with the results; and you might want to backup your source material and keep it for six months or so just to be sure (yeah — I know it’s big; but a DVD will fit on a DVD).

HandBrake

Originally posted 2009-12-17 01:00:07.

Usability Summary

I think I can sum up the real problem with Linux or any open source system where there’s no real usability mandate…

Developers make arbitrary decisions that suit their needs without any regard for how others will view those decisions — or even figure out how to use what they build… the real difference between Windows and OS-X and Linux is that the first two are the cooperative efforts of “experts” who try very hard to address the needs of a target audience who wouldn’t be capable of writing their own operating system.

And, of course, with something like Linux it’s geometrically worse than most open source software, since any given Linux is the combination of hundreds of separate open source modules put together in a completely arbitrary fashion.

It really is funny that what I’ve been describing as a lack of cohesiveness is layered; and I suspect that no matter how good the intentions of a single developer to wrap it all inside a nice pretty shell — one that gives a forward-facing pretense of a system planned and targeted for productivity — the ugly truth of how much of a patchwork it is will show through… and we can look back on early versions of Windows and MacOS and see just that… it’s really only been within the last five or six years that those systems have risen to the point that they are in fact fairly cohesive, designed to be tools for people to solve problems with, not projects built for the sole purpose of developing a life of their own.

Without some unifying direction, the only Linux I can see succeeding is Android; and that, my friends, is likely to become a collection of closed source tools running on top of an open source kernel.  Trust me, you haven’t seen an evil empire until Google gets on your desktop, phone, set-top box, etc…

Originally posted 2010-01-11 01:00:10.

Linux on the desktop

I’ve been experimenting with Linux as a server for several months now; and I have to say for the price it’s a clear winner over Microsoft Windows Server 2008.

Other than desktop search, Linux has been a clear winner across the board.  Network file sharing, application services, etc. all seem to work, and work well.  Plus with the webmin GUI for managing the server, it’s extremely easy — easier, in fact, than figuring out where to go to do the task at hand in Windows Server 2008.

With my success using Linux as a server, I have decided (once again) to investigate Linux as a desktop replacement for Windows… after all, how much does one normally do with a desktop?

I experimented briefly with Ubuntu on a laptop when I was cloning the drive in it, but I didn’t put it through exhaustive paces (I was quite impressed that Ubuntu auto-magically installed drivers for all the hardware in the notebook; though that feat was no better than Windows 7).

I need to go over my requirements a few more times before I start the test, but what I believe is important is:

  • Hardware support, including multiple displays, scanners, web cams, etc
  • Office (OpenOffice will work the same as it has on Windows)
  • Financial Management (I guess I’ll have to move over to MoneyDance; it’s not free, but it’s fairly well thought out)
  • Media Playback (VLC runs on Linux just like Windows, plus there are a number of media players I’ll take a look at)
  • DVD RIPping (my last try to do that on Linux wasn’t very successful)
  • Video transcoding (I think HandBrake is broken on the current version of Ubuntu — so that might take a little work)

I’ll also evaluate it for ease of use and customization…

The evaluation will be done on an Intel DG45ID motherboard (G45 chipset) with an Intel Core2 E7200 and 4GB DDR2, multiple SATA2 hard drives, and a SATA DVD-RW; I’ll test with both an nVidia 9500 and the Intel GMA X4500HD integrated graphics, running both the 32-bit and 64-bit Ubuntu 10.04 LTS distributions.

Let the fun begin!

Originally posted 2010-08-12 02:00:28.

conglomeration

con·glom·er·a·tion (kən-glŏm′ə-rā′shən)
n.

    1. a. The act or process of conglomerating.
       b. The state of being conglomerated.
    2. An accumulation of miscellaneous things.

The American Heritage® Dictionary of the English Language, Fourth Edition copyright ©2000 by Houghton Mifflin Company. Updated in 2009. Published by Houghton Mifflin Company. All rights reserved.


conglomeration [kənˌglɒməˈreɪʃən] n

  1. a conglomerate mass
  2. a mass of miscellaneous things
  3. the act of conglomerating or the state of being conglomerated

Collins English Dictionary – Complete and Unabridged © HarperCollins Publishers 1991, 1994, 1998, 2000, 2003


conglomeration a cluster; things joined into a compact body, coil, or ball.

Examples: conglomeration of buildings, 1858; of chances; of Christian names, 1842; of men, 1866; of sounds, 1626; of threads of silk worms, 1659; of vessels, 1697; of words.

Dictionary of Collective Nouns and Group Terms. Copyright 2008 The Gale Group, Inc. All rights reserved.


The SCO infringement lawsuit over the Unix trademark is over… the Supreme Court has ruled that Novell owns the Unix trademark and copyright, and SCO has no grounds for its litigation.  Just as Microsoft owned and retained the Xenix copyright while SCO distributed that operating system, so Novell retained the Unix copyright while SCO distributed that operating system.

Which means Novell now has a prime asset — and could be ripe for harvesting (that’s a poetic way to say merger, take-over, buy-out).

Which will likely be bad for Linux.

WHAT?

Yep, take a look at what happened when Oracle purchased Sun (one of the largest companies supporting Open Source innovation in Linux, virtualization, etc) — there’s definitely movement in Oracle to retract from the Open Source and free (free – like free beer) software efforts that Sun was firmly behind.

Consider what happens if a company acquires Novell and uses the SystemV license from Novell to market a closed source operating system and discontinues work on Suse; or at minimum decides not to distribute Suse for free (free – like free beer).

“Live free or die” might become a fading memory.

Originally posted 2010-06-05 02:00:18.