Entries Tagged as 'Linux'

Anti-Malware Programs

First, malware is a reality and no operating system is immune to it.

Malware is most common on operating systems that are prevalent (there’s no reason to target 1% of the installed base, now is there); so an obscure operating system is far less likely to be the target of malware.

Malware is most common on popular operating systems that generally do not require elevation of privileges to install software (OS-X, *nix, Vista, and Server 2008 all require that a user elevate their privileges before installing software, even if they have rights to administer the machine).

The reality is that even a seasoned computer professional can be “tricked” into installing malware; and the only safe computer is one that’s disconnected from the rest of the world and doesn’t have any way to get new software onto it (that would probably be a fairly useless computer).

Beyond exercising common sense (just don’t install software you don’t need or are unsure of; remember, you can install and test software in a virtual machine using UNDO disks before you commit it to a real machine) and using a hardware “firewall” between you and your high-speed internet connection (residential gateway devices should be fine as long as you change the default password, disable WAN administration, and use WPA or WPA2 on your wireless network), anti-malware software is your best line of defense.

There are a lot of choices out there, but one of the best you’ll find is Avast! There’s a free edition for non-commercial use, and of course several commercial versions for workstations and servers.

My experience is that on all but the slowest computers Avast! performs well, and catches more malware than most any of the big-name commercial solutions.

For slower computers that need malware protection, consider AVG (they also have a free version for non-commercial use); I don’t find it quite as good as Avast! at stopping as wide a range of threats, but it’s much lighter on resource demands (and that helps keep your legacy machine usable).

Originally posted 2009-01-02 12:00:01.

Compression

There are two distinct features on which Windows Server 2008 outshines Linux; and both center on compression.

For a very long time Microsoft has supported transparent compression as a part of NTFS; you can designate, at a file or directory level, which parts of the file system the operating system compresses (applications need do nothing to use compressed files).  This feature was probably originally intended to reduce the disk footprint of seldom-used files; however, with the explosive growth in computing power, compressed files can often be read and decompressed faster than an uncompressed file can be read.  Of course, if you’re modifying, say, a byte or two in the middle of a compressed file over and over, it might not be a good idea to mark it as compressed; but if you’re basically reading the file sequentially, then compression may dramatically increase the overall performance of the system.

The reason for this increase is easy to understand: many files compress ten to one (or better), which means each disk read effectively delivers ten times the information (for example, a disk that streams 100 MB/s of compressed data is delivering roughly 1 GB/s of file content), and decompressing that stream puts no appreciable burden on a modern, multi-core, single-instruction/multiple-data capable processor.

Recently, with SMBv2, Microsoft has expanded the file sharing protocol to be able to transport a compressed data stream, or even a differential data stream (Remote Differential Compression, or RDC), rather than necessarily having to send every byte of the file.  This also often greatly enhances the effective data rate, since once again a modern, multi-core, single-instruction/multiple-data capable processor can compress (and decompress) a data stream at a much higher rate than most any network fabric can transmit it (the exception would be 10G).  On highly constrained networks, or networks with extremely high error rates, the increase in effective throughput can be staggering.

Unfortunately, Linux lags behind in both areas.

Ext4 does not include transparent compression; and currently no implementation of SMBv2 is available for Linux servers (or clients).

While there’s no question whatsoever that the initial cost of a high-performance server is lower if Linux is chosen as the operating system, the “hidden” costs of lacking compression may make the total cost of ownership harder to determine.

Supporting transparent compression in a file system is merely a design criterion for a new file system (say Ext5 or Ext4.1); however, supporting SMBv2 will be much more difficult since (unlike SMBv1) it is a closed/proprietary file sharing protocol.

Originally posted 2010-07-11 02:00:49.

Operating Systems

I have computers running Windows (most flavors), OS-X, Linux, and BSD (or we could generically call those *nix) — and have had computers running SunOS, Solaris, and OSF… so I consider myself well versed in operating systems from a user standpoint (and a developer standpoint as well).

Recently I took a look at how practical each of the “popular” choices was as a desktop environment for what I would consider an average user; and I set the goals of an average user to be:

  • Email
  • Managing contact and schedules
  • Browsing the internet
  • Office tasks (word processing and simple spreadsheets)
  • Multimedia (music and movies)
  • Managing finances

And I looked at Windows (Vista Ultimate, though much of this would apply to XP as well), OS-X, and Ubuntu Linux (I felt that was a good distribution for an average user).

On email, managing contacts and schedules, browsing the internet, and office tasks I would say that all three of the operating systems were reasonably equal… very few real differences in capabilities or ease of use (both Vista and OS-X have options for commercial as well as free software; on Ubuntu only free software was used).  For multimedia both Vista and OS-X were far better than Ubuntu (yes, Ubuntu could do most everything the other two could do, but the software was very piecemeal, and didn’t “fit” well with the rest of the system).  For managing finances all of them had non-commercial and commercial solutions, and depending on your needs any or all of them might be sufficient.

Vista

Microsoft’s current Windows operating system for desktop PCs.  Vista is well suited for most tasks an average user is likely to do.  Since the cost of Vista is included in most PC purchases, only upgrade expenses need to be considered (this isn’t true if you’re building your own PC from parts; but if you’re recycling an old PC it may already have a license for Windows).  The cost of a PC does not generally include an office suite.  There’s a host of free software that you can use if you elect not to purchase additional software from Microsoft.


OS-X

Apple’s current operating system for Macs.  OS-X is well suited for most tasks an average user is likely to do.  Since the cost of OS-X is included in Mac purchases, only upgrade expenses need to be considered.  The cost of the Mac might include iLife, but not iWork.  There’s a host of free software that you can use if you elect not to purchase additional software from Apple.


Ubuntu

Provided you have a way to download Ubuntu and burn it onto installation media (CD), there’s no cost in acquiring it.  If you have very old hardware, using Ubuntu (or a lighter-weight Linux) might be the only option you really have; but my comparison here is not based on what’s cheapest, it’s what’s reasonable.  Most of what you will need will be installed with the operating system.  There’s a host of free software that you can use by simply downloading it.


Observations:

  • Apples are only easier to use if you’re used to Apples — like all tools, human beings have no inherent ability to know how to use them.  Regardless of the operating system you choose you will need to invest a little time into learning how to use it.  How much time you invest will be determined by the relative sophistication of what you’re trying to do, and what kind of background in computers you have.
  • You’ll find that both Vista and OS-X will provide an inexperienced user with much more “hand holding” than Ubuntu.  But that said, one of the first things you need to get proficient at is searching the internet for “answers”.
  • Pretty much all the annoyances people gripe about are universal in all three of the operating systems (it’s comical that Apple had a whole series of advertisements about Vista annoyances — annoyances their own operating system had had for years for the most part).  There are often system settings that can turn off many of these annoyances, but in fact they are present for a reason — and while you’re learning I recommend you just learn to deal with the annoyances and don’t change system settings without good cause.
  • You’re going to find making changes to many settings on Ubuntu (or any Linux) much more difficult than either Vista or OS-X.
  • You’re going to find that things are far more cohesive on both Vista and OS-X; with Ubuntu it becomes fairly obvious quickly that you’re using a collection of disassociated widgets and parts.


Conclusions:

For most computer users I’d recommend that you consider using either Vista or OS-X for your computing needs.  Leave Ubuntu (and other *nix based operating systems) to more experienced computer users who have a “need” for it.  I suspect that we’ll see improvements in the cohesiveness of non-commercial operating systems, but for the moment they just aren’t ready for prime time.

Originally posted 2008-12-26 12:00:38.

Elive – Luxury Linux

I’ll have to start my post off with what may seem like a very unfair comment; and it may be.

I’ll prefix this with: I don’t ever feel comfortable with individuals or companies who try to charge for Open Source software when they don’t offer anything tangible for that money, and don’t allow (and encourage) you to try out what you’re going to be paying for before you are asked to pay for it.

Elive falls squarely into this category.

You cannot download a “stable” version of Elive from the publisher’s site unless you make a donation (I believe $10 is the minimum); you certainly can find torrents and FTP links to download it from other sites if you’re willing to put a few minutes into it.

Strictly my opinion; but I suspect the publisher realizes that no one would ever pay him for a “stable” version of Elive because what he passes off as stable isn’t.

When Elive boots, it’s striking, and all the applications that are installed with it seem to work nicely.  The interface, while not 100% Mac-like, is intuitive and easy to use…

So why start with such a strong negative stand?

Easy, Elive just isn’t stable.  It’s mostly form with little function.

What’s included on the CD seems to work fairly well, but start updating components or installing additional software (the VirtualBox guest additions started me on the road to ruin) and then the trouble starts… laughably, you end up with an environment that has the stability of Windows 9x on junker hardware rather than that of OS-X (or Linux).

I suspect that the failing of Elive is that it isn’t a collaborative project of many people; nor is it a commercial venture from a publisher with the resources to adequately test it.

I simply wouldn’t pursue it the way it’s being pursued; but I like quality, and would not be comfortable asking for donations from people who will probably end up unable to use the version they donated for (and there’s no mention of whether you get upgrades for life for free, or only need to donate again when you feel you’ve gotten something of substance).

My advice… look at the free “unstable” build, play with it, make it do what you want it to do; when it crashes, move on; don’t expect a great deal more from the “stable” version.

Hopefully, though, others will look at Elive and see the potential and we’ll see another distribution that is every bit as flashy and way more stable.

Elive

Originally posted 2010-01-04 01:00:17.

Ubuntu – Desktop Search

Microsoft has really shown the power of desktop search in Vista and Windows 7; their newest Desktop Search Engine works, and works well… so in my quest to migrate over to Linux I wanted both a server-style and a desktop-style search.

So the quest began… and it was as short a quest as marching on the top of a butte.

I started by reviewing what I could find on the major contenders (just do an Internet search, and you’ll only find about half a dozen reasonable articles comparing the various desktop search solutions for Linux)… which were few enough that it didn’t take very long (alphabetical):

  • Beagle
  • Google Desktop Search
  • Recoll
  • Strigi
  • Tracker

My metrics for evaluating a desktop search solution focused on the following points:

  • ease of installation, configuration, maintenance
  • search speed
  • search accuracy
  • ease of access to search (applet, web, participation in Windows search)
  • resource utilization (cpu and memory on indexing and searching)

I immediately passed on Google Desktop Search; I have no desire for Google to have more access to information about me; and I’ve tried it before in virtual machines and didn’t think very much of it.

Beagle

I first tried Beagle; it sounded like the most promising of all the search engines, and Novell was one of the developers behind it, so I figured it would be a stable baseline.

It was easy to install and configure (the package manager did most of the work); and I could use the search application or the web search, though I had to enable the web interface using beagle-config:

beagle-config Networking WebInterface true

And then I could just go to port 4000 (either locally or remotely).

I immediately did a test search; nothing came back.  Wow, how disappointing — several hundred documents in my home folder should have matched.  I waited and tried again — still nothing.

While I liked what I saw, a search engine that couldn’t return reasonable results to a simple query (at all) was just not going to work for me… and since Beagle isn’t actively developed any longer, I’m not going to hold out for them to fix a “minor” issue like this.

Tracker

My next choice to experiment with was Tracker; you couldn’t ask for an easier desktop search to experiment with on Ubuntu — it seems to be the “default”.

One thing that’s important to mention — you’ll have to enable the indexer (per-user), it’s disabled by default.  Just use the configuration tool (you might need to install an additional package):

tracker-preferences

Same test, but instantly I got about a dozen documents returned, and additional documents started to appear every few seconds.  I could live with this; after all I figured it would take a little while to totally index my home directory (I had rsync’d a copy of all my documents, emails, pictures, etc from my Windows 2008 server to test with, so there was a great deal of information for the indexer to handle).

The big problem with Tracker was there was no web interface that I could find (yes, I’m sure I could write my own web interface; but then again, I could just write my own search engine).

Strigi

On to Strigi: straightforward to install, and easy to use… but it didn’t seem to give me the results I’d gotten quickly with Tracker (though better than Beagle), and it seemed to be limited to only ten results (WTF?).

I honestly didn’t even look for a web interface for Strigi; it was way too much of a disappointment (in fact, I think I’d rather have put more time into Beagle to figure out why I wasn’t getting search results than work with Strigi).

Recoll

My last test was with Recoll; it looked promising from all that I read, but everyone seemed to indicate it was difficult to install and that you needed to build it from source.

Well, there’s an Ubuntu package for Recoll, so it’s just as easy to install; it just turned out to be a waste of effort to install.

I launched the recoll application, and typed a query in — no results came back, but numerous errors were printed in my terminal window.  I checked the preferences, and made a couple minor changes — ran the search query again — got a segmentation fault, and called it a done deal.

It looked to me from the size of the database files that Recoll had indexed quite a bit of my folder; why it wouldn’t give me any search results (and seg faulted) was beyond me — but it certainly was something I’d seen before with Linux based desktop search.

Conclusions

My biggest conclusion was that Desktop Search on Linux just isn’t really something that’s ready for prime time.  It’s a joke — a horrible joke.

Of the search engines I tried, only Tracker worked reasonably well, and it has no web interface, nor does it participate in a Windows search query (an SMB2 feature that directs the server to perform the search when querying a remote file share).

I’ve been vocal in my past that Linux fails as a Desktop because of the lack of a cohesive experience; but it appears that Desktop Search (or search in general) is a failing of Linux as both a Desktop and a Server — and clearly a reason why choosing Windows Server 2008 is the only reasonable choice for businesses.

The only upside to this evaluation was that it took less time to do than to read about or write up!

Originally posted 2010-07-06 02:00:58.

Dynamic IP Filtering (Black Lists)

There are a number of reasons why you might want to use a dynamic black list of IP addresses to prevent your computer from connecting to, or being connected to by, users on the Internet who might not have your best interests at heart…

Below are three different dynamic IP filtering solutions for various operating systems; each of them is open source, has an easy-to-use GUI, and uses the same filter list formats (and will download those lists from a URL or load them from a file).

You can read a great deal more about each program and the concepts of IP blocking on the web pages associated with each.
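
If you’re curious what these tools are doing under the hood on a Linux box, the same effect can be achieved by hand with ipset and iptables; the sketch below is purely illustrative (the set name and list file are made up, the list is assumed to be one CIDR range per line, and the real programs handle downloading and parsing the published list formats for you):

# create a set, load it from a local file, and drop traffic in both directions
ipset create blocklist hash:net
while read cidr; do ipset add blocklist "$cidr"; done < blocklist.txt
iptables -I INPUT -m set --match-set blocklist src -j DROP
iptables -I OUTPUT -m set --match-set blocklist dst -j DROP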

Originally posted 2010-08-17 02:00:55.

Defragmenting

There are many people out there that say that *nix and Mac file systems don’t fragment — only Windows does.

They’re dead wrong.

[I know I’ve said this before, but it’s worth saying again]

All three file systems (and in Windows we’re talking about NTFS, not FAT) derive from the same basic file system organization, and all three have pretty much the same characteristics (there are differences, but those really have nothing to do with the likelihood of fragmentation).

Fragmentation is just a by-product of the way a file system works.  The file system must make decisions about how to lay files down on the disk, and since it doesn’t have a crystal ball it cannot see the future.  Thus if a file is pinned between two other files and it must grow, the file either needs to be moved (creating an empty spot of, at most, its former size) or extended in another area (thus becoming fragmented).

There are various schemes for handling file allocation, but most of them rely on the application creating the file giving the operating system (and thus the file system) sufficient information about the file’s maximum size, and hints as to whether it is temporary, may grow, etc.
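
On Linux, for instance, a file’s full size can be preallocated up front so the file system has a chance to find one contiguous run; this is just a sketch (the path and size are arbitrary), and applications typically accomplish the same thing through posix_fallocate():

# reserve 1 GB of space before any data is written to the file
fallocate -l 1G /var/tmp/prealloc.dat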

Given that file systems will fragment, the need for defragmentation is real.  Windows recognizes this (mainly because Windows used to use a FAT file system where fragmentation caused severe performance issues).

If you have a *nix or Mac based system, I’m sure you can locate a reasonably good defragmenter (not everyone is in denial about the need for periodically defragmenting the system).  If you have a Windows based system you already have a reasonably good defragmenter that came with the system (a “lite” version of Executive Systems’ Diskeeper; the company now just goes by the name Diskeeper Corporation).  You can, of course, purchase a number of commercial products, like the full-blown Diskeeper, O&O Defrag (my personal favorite), or download a host of free or inexpensive products.

The key to defragmenting your system is knowing when you should invest the time (and wear on your disks).  The most accurate answer would be when system fragmentation reaches a point where it adversely affects performance.  That seems a little vague, but most of the defragmentation tools will actually do an analysis and advise you whether they should be run.  Some of them have active defragmentation (but like the file system, they don’t have a crystal ball, and will often cost performance rather than enhance it; so I would just say no to active defragmentation).

A good rule of thumb is that right after you install your system, or any time you install major updates or service packs, you should defragment your system.  It’s a good idea to clean off temporary files (like your browser cache, etc) before you defragment.  And you might even want to clean off old restore points (if you have them enabled).

There’s certainly no reason to defragment your system daily or weekly; but an occasional night of running your defragmenter of choice will likely decrease boot time and increase overall system performance.

One other little tidbit: remove your paging file before defragmenting; then after you’re finished, create a new paging file of a fixed size (i.e. set the minimum and maximum to the same size).  That way you have a nicely defragmented paging file that will not cause fragmentation or fragment itself (leading to better system performance).  Of course, if your system has enough memory to run without a paging file, you don’t need one at all.

Originally posted 2010-02-21 01:00:20.

Ubuntu – Creating A Disk Mirror

A disk mirror, or RAID1 is a fault tolerant disk configuration where every block of one drive is mirrored on a second drive; this provides the ability to lose one drive (or have damaged sectors on one drive) and still retain data integrity.

RAID1 will have lower write performance than a single drive, but will likely have slightly better read performance than a single drive.  Other types of RAID configurations will have different characteristics; but RAID1 is simple to configure and maintain (and conceptually it’s easy for most anyone to understand the mechanics), and it is the topic of this article.

Remember, all these commands will need to be executed with elevated privileges (as super-user), so they’ll have to be prefixed with ‘sudo’.

First step, select two disks, preferably identical (but as close to the same size as possible), that don’t have any data on them (or at least don’t have any important data on them).  You can use Disk Utility (GUI) or gparted (GUI) or cfdisk (CLI) or fdisk (CLI) to confirm that the disks have no data and to change (or create) the partition type to “Linux raid autodetect” (type “fd”); also note the device names that correspond to each drive, as they will be needed when building the array.
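
If you’d rather prepare the drives from the command line, something along these lines should work (a sketch only; it wipes the existing partition table on the drive, so adjust the device name carefully and repeat for the second drive):

  • parted -s /dev/sde mklabel msdos
  • parted -s /dev/sde mkpart primary 1MiB 100%
  • parted -s /dev/sde set 1 raid on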

Check to make sure that mdadm is installed; if not you can use the GUI package manager to download and install it; or simply type:

  • apt-get install mdadm

For this example, we’re going to say the drives were /dev/sde and /dev/sdf.

Create the mirror by executing:

  • mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sde1 missing
  • mdadm --manage /dev/md0 --add /dev/sdf1

Now you have a mirrored drive, /dev/md0.
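
Because the mirror was created with one member “missing” and the second device added afterwards, mdadm will synchronize the new member in the background; you can keep an eye on the progress with something like:

  • watch cat /proc/mdstat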

At this point you could set up an LVM volume, but we’re going to keep it simple (and for most users, there’s no real advantage to using LVM).

Now you can use Disk Utility to create a partition (I’d recommend a GPT style partition) and format a file system (I’d recommend ext4).
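
The same thing can be done from the command line if you prefer; a rough sketch using the devices from this example (if /dev/md0p1 doesn’t appear right away, partprobe /dev/md0 should refresh the partition table):

  • parted -s /dev/md0 mklabel gpt
  • parted -s /dev/md0 mkpart primary ext4 1MiB 100%
  • mkfs.ext4 /dev/md0p1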

You will want to decide on a mount point (and create that directory).

You will probably have to add an entry to /etc/fstab and /etc/mdadm/mdadm.conf if you want the volume mounted automatically at boot (I’d recommend using the UUID rather than the device names).

Here’s an example mdadm.conf entry

  • ARRAY /dev/md0 level=raid1 num-devices=2 UUID=d84d477f:c3bcc681:679ecf21:59e6241a

And here’s an example fstab entry

  • UUID=00586af4-c0e8-479a-9398-3c2fdd2628c4 /mirror ext4 defaults 0 2

You can use mdadm to get the UUID of the mirror (RAID) container

  • mdadm --examine --scan
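
The output of that scan is already in mdadm.conf’s ARRAY format, so (assuming the standard Ubuntu file layout) you can append it directly and then rebuild the initramfs so the array is assembled at boot:

  • sh -c 'mdadm --examine --scan >> /etc/mdadm/mdadm.conf'
  • update-initramfs -u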

And you can use blkid to get the UUID of the file system

  • blkid

You should probably make sure that you have SMART monitoring installed on your system so that you can monitor the status (and predictive failure) of drives.  To get information on the mirror you can use the Disk Utility (GUI) or just type

  • cat /proc/mdstat
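
If SMART tools aren’t already installed, something like this will get them and query one of the member drives (the package and device names are the usual ones on Ubuntu, but treat it as a sketch):

  • apt-get install smartmontools
  • smartctl -a /dev/sde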

There are many resources on setting up mirrors on Linux; for starters you can simply look at the man page for the mdadm command.

NOTE: This procedure was developed and tested using Ubuntu 10.04 LTS x64 Desktop.

Originally posted 2010-06-28 02:00:37.

Ubuntu – Creating A RAID5 Array

A RAID5 array is a fault tolerant disk configuration which uses a distributed parity block; this provides the ability to lose one drive (or have damaged sectors on one drive) and still retain data integrity.

RAID5 will likely have slightly lower write performance than a single drive, but will likely have significantly better read performance than a single drive.  Other types of RAID configurations will have different characteristics.  RAID5 requires a minimum of three drives, and may have as many drives as desired; however, at some point RAID6, with multiple parity blocks, should be considered because of the potential for an additional drive failure during a rebuild.

The following instructions will illustrate the creation of a RAID5 array with four SATA drives.

Remember, all these commands will need to be executed with elevated privileges (as super-user), so they’ll have to be prefixed with ‘sudo’.

First step, select your disks (four in this example), preferably identical (but as close to the same size as possible), that don’t have any data on them (or at least don’t have any important data on them).  You can use Disk Utility (GUI) or gparted (GUI) or cfdisk (CLI) or fdisk (CLI) to confirm that the disks have no data and to change (or create) the partition type to “Linux raid autodetect” (type “fd”); also note the device names that correspond to each drive, as they will be needed when building the array.

Check to make sure that mdadm is installed; if not you can use the GUI package manager to download and install it; or simply type:

  • apt-get install mdadm

For this example, we’re going to say the drives were /dev/sde /dev/sdf /dev/sdg and /dev/sdh.

Create the RAID5 by executing:

  • mdadm --create /dev/md1 --level=5 --raid-devices=4 /dev/sd{e,f,g,h}1

Now you have a RAID5 fault tolerant drive sub-system, /dev/md1 (the defaults for chunk size, etc are reasonable for general use).
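
You can confirm the state of the array (and watch the initial parity synchronization) with mdadm itself, for example:

  • mdadm --detail /dev/md1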

At this point you could set up an LVM volume, but we’re going to keep it simple (and for most users, there’s no real advantage to using LVM).

Now you can use Disk Utility to create a partition (I’d recommend a GPT style partition) and format a file system (I’d recommend ext4).

You will want to decide on a mount point (and create that directory).

You will probably have to add an entry to /etc/fstab and /etc/mdadm/mdadm.conf if you want the volume mounted automatically at boot (I’d recommend using the UUID rather than the device names).

Here’s an example mdadm.conf entry

  • ARRAY /dev/md1 level=raid5 num-devices=4 UUID=d84d477f:c3bcc681:679ecf21:59e6241a

And here’s an example fstab entry

  • UUID=00586af4-c0e8-479a-9398-3c2fdd2628c4 /mirror ext4 defaults 0 2

You can use mdadm to get the UUID of the RAID5 container

  • mdadm --examine --scan

And you can use blkid to get the UUID of the file system

  • blkid

You should probably make sure that you have SMART monitoring installed on your system so that you can monitor the status (and predictive failure) of drives. To get information on the RAID5 container you can use the Disk Utility (GUI) or just type

  • cat /proc/mdstat

There are many resources on setting up RAID5 sub-systems on Linux; for starters you can simply look at the man page for the mdadm command.
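
As an illustration of the fault tolerance, if a member drive were to fail you would mark it failed, remove it, and add a replacement, and mdadm would rebuild onto the new drive; a sketch only (the replacement device /dev/sdi1 here is hypothetical):

  • mdadm --manage /dev/md1 --fail /dev/sdf1 --remove /dev/sdf1
  • mdadm --manage /dev/md1 --add /dev/sdi1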

NOTE: This procedure was developed and tested using Ubuntu 10.04 LTS x64 Desktop.

Originally posted 2010-06-29 02:00:15.

Ubuntu – RAID Creation

I think learning how to use mdadm (/sbin/mdadm) is a good idea, but in Ubuntu Desktop you can use Disk Utility (/usr/bin/palimpsest) to create most any of your RAID (“multiple disk”) configurations.

In Disk Utility, just access “File->Create->Raid Array…” on the menu and choose the options.  Before doing that, you might want to clear off the drives you’re going to use (I generally create a fresh GPT partition to ensure the drive is ready to be used as a component of the RAID array).

Once you’ve created the container with Disk Utility, you can even format it with a file system; however, you will still need to manually add the entries to /etc/mdadm/mdadm.conf and /etc/fstab.

One other minor issue I noticed.

I gave my multiple disk containers names (mirror00, mirror01, …) and Disk Utility will show them mounted on device /dev/md/mirror00 — in point of fact, you want to use device names like /dev/md0, /dev/md1, … in the /etc/mdadm/mdadm.conf file.  Also, once again, I highly recommend that you use the UUID for the array configuration (in mdadm.conf) and for the file system (in fstab).
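
If you’re ever unsure which /dev/mdX node a named container actually maps to, the entry under /dev/md/ is just a symlink; something like the following (using my “mirror00” name as the example) will show the real device:

  • readlink -f /dev/md/mirror00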

Originally posted 2010-07-12 02:00:33.