Entries Tagged as 'Linux'

Ubuntu – Disk Utility

When you install Ubuntu 10.04 Desktop, the default menu item for Disk Utility isn’t extremely useful; after all, it’s on the System->Administration menu, so you would assume that it’s meant to administer the machine, not just view the disk configuration.

What I’m alluding to is that by default Disk Utility (/usr/bin/palimpsest) is not run with elevated privileges (as super-user), but rather as the current user — which, if you’re doing things as you should be, means you won’t be able to effect any changes, and Disk Utility will probably end up being a waste of time and effort.

To correct this problem all you need do is modify the menu item which launches Disk Utility to elevate your privileges before launching (using gksu) — that, of course, assumes that you’re permitted to elevate your privileges.

To add privilege elevation to Disk Utility:

  1. Right click your mouse on the menu bar along the top (right on ‘system’ is good) and select ‘edit menu items’
  2. Navigate down to ‘administration’ and select it in the left pane, then select ‘disk utility’ in the right pane
  3. Select ‘properties’ in the buttons on the right
  4. Under ‘command’ prefix it with ‘gksu’ or substitute ‘gksu /usr/bin/palimpsest’ (putting the entire path there)
  5. Then click ‘close’ and ‘close’ again…
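For reference, the modified command for the menu item should end up looking like the line below (this assumes Disk Utility is at its default path on Ubuntu 10.04; you can paste the same line into a terminal first to verify that gksu prompts for your password and launches Disk Utility with the ability to make changes):

gksu /usr/bin/palimpsest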

Originally posted 2010-06-27 02:00:33.

Linux Server

I’ve been experimenting with a Linux server solution for the past couple months — I was prompted to look at this when my system disk failed in a Windows Server 2008 machine.

First, I’m amazed that after all these years Microsoft doesn’t have a standard module for monitoring the health of a system — at the very least reading the S.M.A.R.T. data from the disk drives.

I do have an Acronis image of the server from when I first installed it, but it would be a pain to reconfigure everything on that image to be as it was — and I guess I just haven’t been that happy with Windows Server 2008.

I personally find Windows Server 2008 needlessly complicated.

I’m not even going to start ranting on Hyper-V (I’ve done that enough, comparing it head-to-head with other technology… all I will say is it’s a good thing their big competitor is VMware, or else Microsoft would really have to worry about having such a pathetic virtualization offering).

With a Linux distribution it’s a very simple thing to install a basic server. I actually tried Ubuntu, CentOS, and Fedora. I looked at the Xen distribution as well, but that wasn’t really of interest for a general purpose server.

Personally I found CentOS (think Red Hat) to be a little too conservative with its releases/features; I found Fedora to be a little too bleeding edge with its releases/features (plus there’s no long term support commitment); so I was really just left with Ubuntu.

I didn’t really see any reason to look exhaustively at every Debian based distribution — Ubuntu was, in my mind, the best choice of that family; and I didn’t want to look at any distribution that wasn’t available at no cost, nor any distribution that didn’t have a good, stable track record.

With Ubuntu 10.04 LTS (10.04 is a Long Term Support release – which makes it a very good choice to build a server on) you could choose the Desktop or the Server edition — the main difference between the Server and the Desktop is that the Server edition does not install the X server and graphical desktop components (you can add them later).
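If you do start with the Server edition and later decide you want the graphical desktop, pulling in the stock desktop metapackage is generally all it takes; a minimal sketch on 10.04 (the package name assumed here is the standard Ubuntu metapackage):

# add the full GNOME desktop on top of a Server edition install
sudo apt-get update
sudo apt-get install ubuntu-desktop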

The machine I was installing on had plenty of memory and processor to support a GUI, and I saw no reason not to install the Desktop version (I did try out the Server version on a couple of installs — and perhaps if you have an older machine, or a machine with very limited memory, or a machine that will be taxed to its limits, or a machine where you want the absolute smallest attack surface, you’d want the Server edition — though almost all of those requirements would probably make me shift to CentOS rather than Ubuntu).

My requirements were fairly simple — I wanted to replace the failed Windows 2008 Server with a machine that could perform my DNS, DHCP, web server, file store (home directories — served via CIFS/Samba), and active P2P downloads.

Additionally, the server would have to have fault-tolerant file systems (as did the Windows server).
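On Ubuntu 10.04 the services listed above map onto a handful of stock packages; purely as a sketch (the exact package names are my assumption, not a record of what I installed), the whole stack can be pulled in with something like:

# DNS, DHCP, web server, and Samba file sharing on Ubuntu 10.04
sudo apt-get install bind9 dhcp3-server apache2 samba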

Originally my testing focused on just making sure all the basic components worked, and worked reasonably well.

Then I moved on to getting all the tools I had written working (I converted all the C# code to PHP).

My final phase involved evaluating fault tolerant options. Initially I’d just used the LSI 150-4 RAID controller I had in the Windows Server 2008 (Linux supported it with no real issues — except that Linux was not able to monitor the health of the drives or the array).

I didn’t really see much need to use RAID5 as I had done with Windows Server 2008; so I concentrated on just doing RAID1 (mirroring) — I tried basic mirrors just using md, as well as using lvm (over md).

My feeling was that lvm added an unnecessary level of complexity on a standalone server (that isn’t to say that lvm doesn’t have features that some individuals might want or need). So my tests focused primarily on just simple mirrors using md.
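Creating a simple md mirror from the command line is a one-liner; here’s a minimal sketch (the partitions /dev/sdb1 and /dev/sdc1 are placeholders, substitute whatever your spare drives actually are):

# create a two-drive RAID1 (mirror) array as /dev/md0
sudo mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb1 /dev/sdc1
# watch the initial synchronization progress
cat /proc/mdstat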

I tested performance of my LSI 150-4 RAID5 SATA1 PCI controller (with four SATA2 drives) against RAID1 SATA2 using Intel ICH9 and SiI3132 controllers (with pairs of SATA1 or SATA2 drives). I’d expected that the LSI 150-4 would outperform the md mirror with SATA1 drives on both read and write, but that with SATA2 drives I’d see better reads on the md mirror.

I was wrong.

The md mirrors actually performed better across the board (though negligibly better with SATA1 drives attached) — and the amazing thing was that CPU utilization was extremely low.

Now, let me underscore here that the LSI 150-4 is a PCI-X (64-bit) controller that I’m running as PCI (32-bit); it represents technology that’s about six years old… and it’s limited to SATA1 with no command set enhancements.

So this comparison wouldn’t hold true if I were testing md mirrors against a modern hardware RAID controller — plus the other RAID controllers I have are SAS/SATA2 PCIe and have eight and sixteen channels (more spindles means more performance).

Also, I haven’t tested md RAID5 performance at all.
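If you want to get rough numbers on your own arrays, simple sequential tests with hdparm and dd are enough for a sanity check (this isn’t necessarily how I ran my comparison; the device and mount point below are placeholders):

# rough sequential read test against the array device
sudo hdparm -t /dev/md0
# rough sequential write test (writes a 1 GB file to the mounted array)
dd if=/dev/zero of=/mnt/md0/testfile bs=1M count=1024 conv=fdatasync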

My findings at present are that you can build a fairly high performance Linux based server for a small investment. You don’t need really high end hardware, you don’t need to invest in hardware RAID controllers, and you don’t need to buy software licenses — you can effectively run a small business or home office environment with confidence.

Originally posted 2010-06-24 02:00:09.

Ubuntu – RAID Creation

I think learning how to use mdadm (/sbin/mdadm) is a good idea, but in Ubuntu Desktop you can use Disk Utility (/usr/bin/palimpsest) to create most any of your RAID (“multiple disk”) configurations.

In Disk Utility, just access “File->Create->Raid Array…” on the menu and choose the options.  Before doing that, you might want to clear off the drives you’re going to use (I generally create a fresh GPT partition table to ensure the drive is ready to be used as a component of the RAID array).
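Writing a fresh GPT partition table can also be done from a terminal if you prefer; a minimal sketch (assuming the target drive is /dev/sdb; double check the device name, since this wipes the existing partition table):

# lay down a brand new, empty GPT partition table on the drive
sudo parted /dev/sdb mklabel gpt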

Once you’ve created the container with Disk Utility, you can even format it with a file system; however, you will still need to manually add the entries to /etc/mdadm/mdadm.conf and /etc/fstab.

One other minor issue I noticed.

I gave my multiple disk containers names (mirror00, mirror01, …) and Disk Utility will show them mounted on device /dev/md/mirror00 — in point of fact, you want to use device names like /dev/md0, /dev/md1, … in the /etc/mdadm/mdadm.conf file.  Also, once again, I highly recommend that you use the UUID for the array configuration (in mdadm.conf) and for the file system (in fstab).
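Both of those UUIDs are easy to capture from a terminal; a minimal sketch (the device name /dev/md0 is a placeholder for whichever array you created):

# append an ARRAY line (including the array’s UUID) to mdadm.conf
sudo mdadm --detail --scan | sudo tee -a /etc/mdadm/mdadm.conf
# show the file system UUID to reference in /etc/fstab
sudo blkid /dev/md0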

Originally posted 2010-07-12 02:00:33.

Linux File System Fragmentation

I’ve always found it hilarious that *nix bigots (particularly Linux bigots) asserted that their file systems, unlike those found in Windows, didn’t fragment.

HA HA

Obviously most anyone who would make that assertion really doesn’t know anything about file systems or Windows.

It’s true that back in the ancient times of Windows, when all you had was FAT or FAT32, fragmentation was a real problem; but with the introduction of HPFS in OS/2 and then NTFS in Windows NT, fragmentation on a Windows system was on par with fragmentation on a *nix system.

Though you’ll recall that in Windows, even with NTFS, defragmentation was possible and tools to accomplish it were readily available (one is even included with the operating system).

Ext2, Ext3, Ext4 — and most any other file system known to man might (like NTFS) attempt to prevent file system fragmentation, but it happens — and over time it can negatively impact performance.

Interestingly enough, with Ext4 there appear to be fewer *nix people in that great river in Egypt — de Nile… or denial as it were.

Ext4 is a very advanced file system; most every trick in the book to boost performance and prevent fragmentation is included — along with the potential for defragmentation.  The tool e4defrag will allow for the defragmentation of single files or entire file systems — though it’s not quite ready… there are still a few more kernel issues to be worked out to allow it to defragment a live file system.

With Ext4, as with NTFS, one way you can defragment a file is to copy it; the file system itself will attempt to locate an area of the disk that can hold the file in contiguous allocation units — but, of course, the file system’s performance can often be improved by coalescing the free space, or at least coalescing free space fragments that would otherwise be too small to hold a file.
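If you’re curious how fragmented a file system actually is, the e4defrag and filefrag tools will tell you without changing anything; a minimal sketch (the paths are placeholders):

# report fragmentation for a directory tree (or a whole file system) without modifying it
sudo e4defrag -c /home
# show how many extents a single file occupies
filefrag /home/user/some-large-file.iso
# attempt an online defragmentation of that file (kernel and tool support permitting)
sudo e4defrag /home/user/some-large-file.iso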

As I said when I started, I’ve always found it hilarious that *nix bigots often don’t have a very good understanding of the technical limitations and strengths of various pieces of an operating system… but let me underscore that just because people don’t always know what they’re talking about doesn’t necessarily mean the solution they’re evangelizing isn’t worth considering.

Originally posted 2010-06-03 02:00:06.

Ubuntu – Desktop Search

Microsoft has really shown the power of desktop search in Vista and Windows 7; their newest Desktop Search Engine works, and works well… so in my quest to migrate over to Linux I wanted both a server style and a desktop style search.

So the quest begun… and it was as short a quest as marching on the top of a butte.

I started by reviewing what I could find on the major contenders (just do an Internet search, and you’ll only find about half a dozen reasonable articles comparing the various desktop search solutions for Linux)… which were few enough that it didn’t take very long (alphabetically): Beagle, Google Desktop Search, Recoll, Strigi, and Tracker.

My metrics to evaluate a desktop search solution would focus on the following points:

  • ease of installation, configuration, maintenance
  • search speed
  • search accuracy
  • ease of access to search (applet, web, participation in Windows search)
  • resource utilization (cpu and memory on indexing and searching)

I immediately passed on Google Desktop Search; I have no desire for Google to have more access to information about me; and I’ve tried it before in virtual machines and didn’t think very much of it.

Beagle

I first tried Beagle; it sounded like the most promising of all the search engines, and Novell was one of the developers behind it, so I figured it would be a stable baseline.

It was easy to install and configure (the package manager did most of the work); I could use either the search application or the web search, though I had to enable the latter using beagle-config:

beagle-config Networking WebInterface true

And then I could just go to port 4000 in a browser (e.g. http://localhost:4000/, either locally or remotely).

I immediately did a test search; nothing came back.  Wow, how disappointing — several hundred documents in my home folder should have matched.  I waited and tried again — still nothing.

While I liked what I saw, a search engine that couldn’t return reasonable results to a simple query (at all) was just not going to work for me… and since Beagle isn’t actively developed any longer, I’m not going to hold out for them to fix a “minor” issue like this.

Tracker

My next choice to experiment with was Tracker; you couldn’t ask for an easier desktop search to experiment with on Ubuntu — it seems to be the “default”.

One thing that’s important to mention — you’ll have to enable the indexer (per-user), it’s disabled by default.  Just use the configuration tool (you might need to install an additional package):

tracker-preferences

Same test, but instantly I got about a dozen documents returned, and additional documents started to appear every few seconds.  I could live with this; after all I figured it would take a little while to totally index my home directory (I had rsync’d a copy of all my documents, emails, pictures, etc from my Windows 2008 server to test with, so there was a great deal of information for the indexer to handle).

The big problem with Tracker was there was no web interface that I could find (yes, I’m sure I could write my own web interface; but then again, I could just write my own search engine).

Strigi

On to Strigi — straightforward to install, and easy to use… but it didn’t seem to give me the results I’d gotten quickly with Tracker (though it did better than Beagle), and it seemed to be limited to only ten results (WTF?).

I honestly didn’t even look for a web interface for Strigi — it was far too much of a disappointment (in fact, I think I’d rather have put more time into Beagle to figure out why I wasn’t getting search results than keep working with Strigi).

Recoll

My last test was with Recoll; it looked promising from all that I read, but everyone seemed to indicate it was difficult to install and that you needed to build it from source.

Well, there’s an Ubuntu package for Recoll — so it’s just as easy to install as the others; it just turned out to be a waste of effort to install.

I launched the recoll application, and typed a query in — no results came back, but numerous errors were printed in my terminal window.  I checked the preferences, and made a couple minor changes — ran the search query again — got a segmentation fault, and called it a done deal.

It looked to me from the size of the database files that Recoll had indexed quite a bit of my folder; why it wouldn’t give me any search results (and seg faulted) was beyond me — but it certainly was something I’d seen before with Linux based desktop search.

Conclusions

My biggest conclusion was that Desktop Search on Linux just isn’t really something that’s ready for prime time.  It’s a joke — a horrible joke.

Of the search engines I tried, only Tracker worked reasonably well, and it has no web interface, nor does it participate in a Windows search query (an SMB2 feature which directs the server to perform the search when querying against a remote file share).

I’ve been vocal in the past that Linux fails as a Desktop because of the lack of a cohesive experience; but it appears that Desktop Search (or search in general) is a failing of Linux as both a Desktop and a Server — and clearly a reason why choosing Windows Server 2008 is the only reasonable choice for businesses.

The only upside to this evaluation was that it took less time to do than to read about or write up!

Originally posted 2010-07-06 02:00:58.

Anti-Malware Programs

First, malware is a reality and no operating system is immune to it.

Malware is most common on operating systems that are prevalent (no reason to target 1% of the installed base now is there); so an obscure operating system is far less likely to be the target of malware.

Malware is most common on popular operating systems that generally do not require elevation of privileges to install software (by contrast, OS-X, *nix, Vista, and Server 2008 all require that a user elevate their privileges before installing software, even if they have rights to administer the machine).

The reality is that even a seasoned computer professional can be “tricked” into installing malware; and the only safe computer is a computer that’s disconnected from the rest of the world and doesn’t have any way to get new software onto it (that would probably be a fairly useless computer).

Beyond exercising common sense (not installing software you don’t need or are unsure of; remember, you can install and test software in a virtual machine using UNDO disks before you commit it to a real machine) and using a hardware “firewall” between you and your high-speed internet connection (residential gateway devices should be fine as long as you change the default password, disable WAN administration, and use WPA or WPA2 on your wireless network), anti-malware software is your best line of defense.

There are a lot of choices out there, but one of the best you’ll find is Avast! — there’s a free edition for non-commercial use, and of course several commercial versions for workstations and servers.

My experience is that on all but the slowest computers Avast! performs well, and catches more malware than most any of the big-name commercial solutions.

For slower computers that still need malware protection, consider AVG (they also have a free version for non-commercial use); I don’t find it quite as good as Avast! at stopping as wide a range of threats, but it’s much lighter on resource demands (and that helps to keep your legacy machine usable).

Originally posted 2009-01-02 12:00:01.

GIMP

GIMP is an acronym for GNU Image Manipulation Program. It is a freely distributed program for such tasks as photo retouching, image composition and image authoring.

It has many capabilities. It can be used as a simple paint program, an expert quality photo retouching program, an online batch processing system, a mass production image renderer, an image format converter, etc.

GIMP is expandable and extensible. It is designed to be augmented with plug-ins and extensions to do just about anything. The advanced scripting interface allows everything from the simplest task to the most complex image manipulation procedures to be easily scripted.

That’s what the GIMP site says; but what GIMP really is, is a free Open Source alternative to programs like Adobe Photoshop and Corel Paint Shop Pro that runs on Linux, OS-X, and Windows.

GIMP is reasonably easy to use, powerful, and rock solid.

If you understand the principles of image/photo editing you’ll be a pro at using GIMP in no time — far easier to use than Photoshop, far more functional than Paint Shop Pro.  And it’s free — totally free — just download it and install it.  There are lots of plug-ins for it as well (so make sure you take a look at some of those add-ins).  Be sure to review the online documentation, tutorials, and FAQ; plus there are a number of well written books on GIMP available for purchase.

GIMP.org

Originally posted 2010-03-08 02:00:45.

Linux usability

While doing my preliminary look at usability in several Linux distributions that had adopted a Mac-ish paradigm I decided I needed to lay several ground rules to fully review them.

First, I decided that using a virtual machine was fine for getting initial impressions, but that just wasn’t going to be acceptable for a complete review… and I also decided that doing a review on only one piece of hardware wasn’t going to give me a very good idea of what problems a user might see related to the computer.

It’s certainly no problem for me to find a computer or two to install these Linux distributions on and run them through their paces; however, I don’t have any “low-end” hardware, so my tests are going to use fairly current generations of hardware.  Be aware that my impressions might not match yours if you’re planning on running these on hardware that is more than a couple of years old (and by a couple of years old I mean hardware whose components were current no more than two years ago).

I’ll perform the following:

  1. Install the distribution (without requiring any manual settings)
  2. Update itself (and applications)
  3. Start up, shut down, log on, log off
  4. Browse the web (that’s a given)
  5. Read email (including setting up the email program)
  6. Play a CD (music)
  7. Play several music files
  8. Play a DVD (movie)
  9. Play several video files
  10. Edit a WYSIWYG document
  11. Edit an image
  12. View and print a PDF
  13. Access a thumb drive
  14. Access files stored on a network device
  15. Access secure digital media (through a USB card reader)
  16. Scan an image
  17. Open a ZIP archive; create a ZIP archive
  18. Email an attachment, recover an email attachment
  19. Install a new (and useful) application
  20. Alter the appearance (preferably using a theme)

Beyond these simple tests I’ll try and appraise the simplicity, clarity, and ease of use of the interface… I’ll also comment on the overall appearance, the look and feel.

Originally posted 2010-01-08 01:00:19.

Linux BitTorrent Clients – Follow-Up

I’ve been using several Linux bit torrent clients fairly heavily for the past week or so, and I have a few new comments about each of the “contenders” — below I’ve ordered them as I would recommend using them.

KTorrent · KTorrent might be a little “fat”, but it works, and it works very well — particularly when dealing with a large number of torrents simultaneously.  This is my pick.

TorrentFlux · TorrentFlux is probably the best solution you’ll find for a torrent server.  Simply said, it works fine (though I don’t know that I’ll continue to use it, simply because it doesn’t seem to be actively improved, and it’s far from perfect).

Transmission · Transmission is simple, and that simplicity seems to pay off — it works, it works well.

qBittorrent · qBittorrent works fairly well for a small number of simultaneous torrents; but if you want to download large numbers of torrents or seed large numbers of torrents stay away from this one — it actually crashes, and unless your goal is just to watch the integrity of your torrents be checked over and over, you can do much better.

Deluge · Deluge was the one I really wanted to like; and it seemed to work, but it has two major problems — it doesn’t handle large numbers of torrents well, and it doesn’t properly handle port forwarding (either through UPnP / NAT-PMP or when you try to set the port forwarding manually).  We’ll just leave it at this: it has issues (that apparently are fairly well known) and progress on it is glacial in its pace.

Moving torrents from one client to another isn’t all that hard to do, a little time consuming maybe… but once you figure out how to do it, and let your data files re-check, you’ll be on your way.

My experience over the past week reminds me that you can do your due diligence by researching every fact and figure about a program all you like; but until you put it through its paces you just won’t know.

NOTES: My test included about 550 torrents totaling just under half a terabyte in total size.  I required that ports be forwarded through a firewall properly (either via UPnP, NAT-PMP, or by hand), and that I be able to control the total number of active torrents (preferably with control over uploads and downloads as well), and be able to restrict the bandwidth (a scheduler was a nice touch, but not a requirement).

Originally posted 2010-08-25 02:00:30.

OpenOffice

You need to find a suite of office applications?

The place to start is OpenOffice.

OpenOffice has a long heritage, and the software was designed and built to be a cohesive set of applications (not a collection of various applications that did different parts of a job).

Parts of OpenOffice rely on Java, and if you’re running Windows you can download and install a version of OpenOffice that includes the Java Runtime Environment (JRE); on most other operating systems it will already be installed.

OpenOffice is able to import and export most document formats you’re used to, plus it can use its own format, OpenDocument (which is an ISO standard), and creating PDFs of the output is a snap.

Writer — if you’re a Windows person you’d probably think of this as “Word”.  It’s an excellent word processor, and it’s well suited for virtually any task you might have.  There are quirks (but hey, there are quirks in “Word” as well, and they randomly change from version to version), but overall it’s intuitive and easy to use.  Plus there’s good documentation available to answer most any question you might have.

Calc — if you’re a Windows person you’d probably think of this as “Excel”.  I’m not a big spreadsheet user, but I can tell you that all the fairly simple tasks I used “Excel” for, Calc did without a problem; it imported the spreadsheets, converted them to its own format, and other than a very slight print alignment issue on one they were perfect (and much smaller and faster).  From my experience and what I’ve read you shouldn’t have any issue with Calc for all your spreadsheet needs.

Impress — if you’re a Windows person you’d probably think of this as “PowerPoint”.  It seems to work, and has all the annoying slideware capabilities a marketing person might want.

Draw — if you’re a Windows person you might think of this as “Visio” or perhaps “Illustrator”.  There’s not an exact equivalent for this tool.  But it’s useful to do diagrams, drawings, etc.  But don’t confuse it with “PhotoShop” — that’s not really an office tool now is it?

Base — if you’re a Windows person you’d probably think of this as “Access”.  Works well and works with most any database you might have.

There is no email / calendar / contact replacement in OpenOffice, nor is there a “OneNote” replacement.  I don’t know that I feel email / calendar / contacts really belong in an office suite, but I certainly have gotten accustomed to being able to collect a bunch of data together in one place with automatic references back to where it came from — so I’d love to see something like “OneNote” added to OpenOffice.

If you’re a casual user, a home user, a student, or a small business user (without restrictive corporate policies) you’ll find that OpenOffice will solve most all your needs.  Try it… save a little cash.

OpenOffice.org

Originally posted 2010-01-19 01:00:42.