Entries Tagged as 'Linux'

Macbuntu

Macbuntu isn’t a sanctioned distribution of Ubuntu like Kubuntu, Xubuntu, etc; rather it’s a set of scripts that turns an Ubuntu desktop into something that resembles a Mac running OS-X… but it’s still very much Ubuntu running gdm (GNOME).

I don’t recommend installing Macbuntu on a production machine, or even a real machine, until you’ve taken it for a spin around the block.  For the most part it’s eye candy; but that said, it does make a Mac user feel a little more comfortable at an Ubuntu workstation, and there’s certainly nothing wrong with the desktop paradigm (remember, the way GNOME, KDE, XFCE, Enlightenment, Windows, OS-X, etc work is largely arbitrary — it’s just a development effort intended to make routine user operations intuitive and simple; but no two people are the same, and not everyone finds the “solution” to a particular use case optimal).

What I recommend you do is create a virtual machine with your favorite virtualization software; if you don’t have virtualization software, consider VirtualBox — it’s still free (until Larry Ellison decides to pull the plug on it), and it’s very straightforward for even novices to use.
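If you prefer a terminal to the GUI wizard, the same virtual machine can be sketched out with VBoxManage; the machine name, memory, and disk size below are just placeholder values, and with a reasonably current VirtualBox the equivalent commands look something like this:

  • VBoxManage createvm --name "Macbuntu-Test" --ostype Ubuntu --register
  • VBoxManage modifyvm "Macbuntu-Test" --memory 1024 --vram 32
  • VBoxManage createhd --filename Macbuntu-Test.vdi --size 10240
  • VBoxManage storagectl "Macbuntu-Test" --name "SATA" --add sata
  • VBoxManage storageattach "Macbuntu-Test" --storagectl "SATA" --port 0 --device 0 --type hdd --medium Macbuntu-Test.vdi

Attach the Ubuntu ISO and finish up in the GUI if you like; the wizard does exactly the same thing under the covers.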

Install Ubuntu 10.10 Desktop (32-bit is fine for the test) in it, and just take all the defaults — it’s easy, and there’s no reason to fine-tune a virtual machine that’s really just a proof-of-concept.

After that, install the guest additions (or your hypervisor’s equivalent tools) and do a complete update…
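On Ubuntu 10.10 the complete update is just a couple of commands from a terminal (or use Update Manager if you prefer clicking); the Guest Additions are on the Devices menu of the VirtualBox window if that’s what you’re running:

  • sudo apt-get update
  • sudo apt-get dist-upgrade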

Once you’re done with all that, just open a terminal and type each of the following (without elevated privileges).

  • wget https://downloads.sourceforge.net/project/macbuntu/macbuntu-10.10/v2.3/Macbuntu-10.10.tar.gz -O /tmp/Macbuntu-10.10.tar.gz
  • tar xzvf /tmp/Macbuntu-10.10.tar.gz -C /tmp
  • cd /tmp/Macbuntu-10.10/
  • ./install.sh

Once you’ve followed the on-screen instructions and answered everything to install all the themes, icons, wallpapers, widgets, and tools (you’ll have to modify Firefox and Thunderbird a little more manually — browser windows are opened for you, but you have to install the plug-ins yourself), you reboot and you’re presented with what looks very much like OS-X (you actually get to see some of the eye candy as it’s installed).

Log in… and you see even more Mac-isms… play play play and you begin to get a feel for how Apple created the slick, unified OS-X experience on top of BSD.

Now if you’re a purist you’re going to push your lower lip out and say this isn’t anything like OS-X… well, maybe it doesn’t carry Steve Jobs’s DNA fingerprint, but I think you’ll hear many users exclaim that this is a significant step forward in making Linux more Mac-ish.

There are a couple of different efforts to create a Mac-like experience under Linux; Macbuntu is focused on making Ubuntu more like OS-X, and as far as I can see it’s probably one of the cleanest and simplest ways to play with an OS-X theme on top of Linux…

If you find you like it, then go ahead and install it on a real machine (the eye candy will be much more pleasing with a manly video card and GPU accelerated effects), and you can uninstall it if you like — but with something this invasive I’d strongly encourage you to follow my advice and try before you buy (so to speak — it’s free, but time and effort count for a great deal).

I’ll make a post on installing Macbuntu for tomorrow so that it’s a better reference.

Macbuntu on SourceForge.net

Macbuntu

Originally posted 2010-11-14 02:00:36.

Anti-Malware Programs

First, malware is a reality and no operating system is immune to it.

Malware is most common on operating systems that are prevalent (no reason to target 1% of the installed base now is there); so an obscure operating system is far less likely to be the target of malware.

Malware is most common on popular operating systems that historically have not required elevation of privileges to install software (OS-X, *nix, Vista, and Server 2008 all require that a user elevate their privileges before installing software, even if they have rights to administer the machine).

The reality is that even a seasoned computer professional can be “tricked” into installing malware; and the only safe computer is a computer that’s disconnected from the rest of the world and doesn’t have any way to get new software onto it (that would probably be a fairly useless computer).

Beyond exercising common sense (not installing software you don’t need or are unsure of; remember, you can install and test software in a virtual machine using UNDO disks before you commit it to a real machine) and putting a hardware “firewall” between you and your high-speed internet connection (a residential gateway device should be fine as long as you change the default password, disable WAN administration, and use WPA or WPA2 on your wireless network), anti-malware software is your best line of defense.

There are a lot of choices out there, but one of the best you’ll find is Avast! — there’s a free edition for non-commercial use, and of course several commercial versions for workstations and servers.

My experience is that on all but the slowest computers Avast! performs well, and catches more malware than most any of the big-name commercial solutions.

For slower computers that need malware protection, consider AVG (they also have a free version for non-commercial use); I don’t find it quite as good as Avast! at stopping as wide a range of threats, but it’s much lower on resource demands (and that helps keep your legacy machine usable).

Originally posted 2009-01-02 12:00:01.

Virtualization Outside the Box

I’ve posted many an article on virtualization, but I felt it was a good time to post just an overview of the choices for virtualization along with a short blurb on each.

Obviously, the operating system you choose and the hardware you have will greatly limit the choices you have in making a virtualization decision.  Also, you should consider how you intend to use virtualization (and for what).

Microsoft VirtualPC (Windows and a very outdated PowerPC Mac version) – it’s free, but that doesn’t really offset the fact that VirtualPC is aging technology, it’s slow, and it had been expected to die (but was used as the basis for Windows 7 virtualization).

Microsoft Hyper-V (Windows Server 2008, “bare metal”) – you can get a free Hyper-V server distribution, but you’ll find it hard to use without a full Server 2008.  Hyper-V is greatly improved over VirtualPC, but it implements a rather dated set of virtual hardware, it really doesn’t perform as well as many other choices, and it will only run on hardware that supports hardware virtualization (Intel VT-x or AMD-V).
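If you’re not sure whether a box supports hardware virtualization, booting any Linux live CD and counting the CPU flags will tell you (a non-zero result means the processor advertises Intel VT-x or AMD-V, though the feature may still need to be switched on in the BIOS):

egrep -c '(vmx|svm)' /proc/cpuinfo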

VMware (Windows, Mac, Linux) – I’ll lump all of their products into one and just say it’s over-priced and held together by chewing gum and band-aids.  I’d recommend you avoid it — even the free versions.

VirtualBox (Windows, Mac, Linux, bare metal) – Sun (now Oracle) produces a commercial and open source (community) edition of an extremely good virtualization solution.  Primarily targeted at desktops, it implements a reasonably modern virtual machine, and will run on most any hardware.

Parallels (Windows, Mac, Linux, bare metal) – a very good virtualization solution, but it’s expensive — and it will continue to cost you money over and over again (upgrades are essential and not free between versions).  You can do much better for much less (like free).

QEMU (Windows, Linux, etc) – this is one of the oldest of the open source projects, and the root of many.  It’s simple, it works, but it’s not a good solution for most users.

Kernel-based Virtual Machines (KVM — don’t confuse it with Keyboard/Video/Mouse switches, the TLA is way overloaded) – this is the solution that Ubuntu (and other Linux distributions) chose for virtualization (though Ubuntu recommends VirtualBox for desktop virtualization).  KVM makes it moderately complicated to set up guest machines, but there are GUI add-ons as well as other tools that greatly simplify the tasks.
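As a rough sketch of what getting started looks like on an Ubuntu host (the package and group names are the ones current at the time of writing and may differ on your release):

  • sudo apt-get install qemu-kvm libvirt-bin bridge-utils virt-manager
  • sudo adduser $USER libvirtd
  • virt-manager

After logging back in (so the group membership takes effect), virt-manager gives you a point-and-click front end for creating and managing guests.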

Xen (Linux) – an extremely good hypervisor implementation (the architecture of Hyper-V and Xen share many of the same fundamental designs), it will run Xen enabled (modified) kernels efficiently on any hardware, but requires hardware assisted virtualization for non-modified kernels (like Windows).

XenSource (bare-metal [Linux]) – this is a commercial product (though now available at no cost) acquired by Citrix which also includes a number of enterprise tools.  All the comments on Xen (above) apply, with the addition that this package is ready (and supported) for enterprise applications and is cost effective in large and small deployments.


My personal choice remains VirtualBox for desktop virtualization on Windows, Mac, and Linux, but if I were setting up a virtual server I’d make sure I evaluated (and would likely choose) XenSource (it’s definitely now a much better choice than building a Hyper-V based solution).

Originally posted 2010-05-03 02:00:58.

Virtualization, Virtualization, Virtualization

For a decade now I’ve been a fan of virtualization (of course, that’s partially predicated on understanding what virtualization is, and how it works — and its limitations).

For software developers it offers a large number of practical uses… but more and more the average computer user is discovering the benefits of using virtual machines.

In Windows 7 Microsoft has built the “Windows XP” compatibility feature on top of virtualization (which means to use it you’ll need a processor that supports hardware virtualization — so many low end computers and notebooks aren’t going to have the ability to use the XP compatibility feature).

While Windows 7 might make running older programs seamless, you can (of course) install another virtualization package and still run older software.

Which virtualization package to choose???

Well, for me it’s an easy choice…

  • Windows Server 2008 on machines that have hardware virtualization – Hyper-V
  • Windows 7 on machines that have hardware virtualization – Virtual PC
  • All others (Windows, OS-X, Linux) – VirtualBox

Now, the disclaimers… if I were running a commercial enterprise and didn’t want to spend the money to buy Windows Server 2008, Microsoft does offer a standalone Hyper-V Server at no cost (you really need one full Windows Server 2008 installation in order to manage it effectively — but you can install the management tools on Vista if you really don’t have it in your budget to buy a single license).

And no, I wouldn’t choose Linux OR OS-X as the platform to run a commercial virtualization infrastructure on… simply because Windows’ device support for modern hardware (and modern hardware is what you’re going to base a commercial virtualization infrastructure on if you’re serious) is unparalleled PERIOD.

If you’re running Vista or Vista 64 you may decide to use Virtual PC (a better choice would be Virtual Server 2005 R2); but VirtualBox is being actively developed, and its virtual hardware reference is much more modern (and, I feel, a better choice).

To make it simple… the choice comes down to Microsoft Hyper-V derived technology or VirtualBox.  Perhaps if I were a *nix bigot I’d put Xen in the loop, but as with so many Linux-centric projects there are TOO MANY distributions, and too many splinter efforts.

One last note; keep in mind that you need a license for any operating system that you run in a virtual environment.

Originally posted 2009-08-12 01:00:34.

Compression

There are two distinct features where Windows Server 2008 outshines Linux; and both center on compression.

For a very long time Microsoft has supported transparent compression as a part of NTFS; you can designate on a file-by-file or directory level what parts of the file system are compressed by the operating system (applications need do nothing to use compressed files).  This feature was probably originally intended to save the disk footprint of seldom used files; however, with the explosive growth in computing power what’s happened is that compressed files can often be read and decompressed much faster from a disk than an uncompressed file can.  Of course, if you’re modifying say a byte or two in the middle of a compressed file over and over, it might not be a good idea to mark it as compressed — but if you’re basically reading the file sequentially then compression may dramatically increase the overall performance of the system.
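You don’t even need the Explorer properties dialog to turn it on; from a command prompt, compact does the same thing (the folder below is just an example; /c compresses, /s recurses into subdirectories, and /u reverses it):

compact /c /s:D:\Archives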

The reason for this increase is easy to understand; many files can be compressed ten to one (or better), which means each disk read effectively brings in ten times the information, and for a modern, multi-core, single-instruction/multiple-data capable processor, decompressing this stream of data puts no appreciable burden on the processing unit(s).

Recently, with SMBv2, Microsoft has expanded the file sharing protocol to be able to transport a compressed data stream, or even a differential data stream (Remote Differential Compression – RDC), rather than necessarily having to send every byte of the file.  This also often greatly enhances the effective data rate, since once again a modern, multi-core, single-instruction/multiple-data capable processor can compress (and decompress) a data stream at a much higher rate than most any network fabric can transmit the data (the exception would be 10G Ethernet).  In cases of highly constrained networks, or networks with extremely high error rates, the increase in effective throughput could be staggering.

Unfortunately, Linux lags behind in both areas.

Ext4 does not include transparent compression; and currently no implementation of SMBv2 is available for Linux servers (or clients).

While there’s no question, whatsoever, that the initial cost of a high performance server is less if Linux is chosen as the operating system, the “hidden” costs of lacking compression may make the total cost of ownership harder to determine.

Supporting transparent compression in a file system is merely a design criterion for a new file system (say Ext5 or Ext4.1); however, supporting SMBv2 will be much more difficult since (unlike SMBv1) it is a closed/proprietary file sharing protocol.

Originally posted 2010-07-11 02:00:49.

Linux Usability

While doing my preliminary look at usability in several Linux distributions that had adopted a Mac-ish paradigm I decided I needed to lay several ground rules to fully review them.

First, I decided that using a virtual machine was fine for getting initial impressions, but that just wasn’t going to be acceptable for a complete review… and I also decided that doing a review on only one piece of hardware wasn’t going to give me a very good idea of what problems a user might see related to the computer.

It’s certainly no problem for me to find a computer or two to install these Linux distributions on and run them through their paces; however, I don’t have any “low-end” hardware, so my tests are going to use fairly current generations of hardware.  Be aware that my impressions might not match yours if you’re planning on running these on hardware that is more than a couple of years old (and by a couple of years old I mean hardware whose components were current no more than two years ago).

I’ll perform the following:

  1. Install the distribution (accepting the defaults, without any manual configuration)
  2. Update the system (and applications)
  3. Start up, shut down, log on, log off
  4. Browse the web (that’s a given)
  5. Read email (including setting up the email program)
  6. Play a CD (music)
  7. Play several music files
  8. Play a DVD (movie)
  9. Play several video files
  10. Edit a WYSIWYG document
  11. Edit an image
  12. View and print a PDF
  13. Access a thumb drive
  14. Access files stored on a network device
  15. Access Secure Digital media (through a USB card reader)
  16. Scan an image
  17. Open a ZIP archive; create a ZIP archive
  18. Email an attachment, recover an email attachment
  19. Install a new (and useful) application
  20. Alter the appearance (preferably using a theme)

Beyond these simple tests I’ll try and appraise the simplicity, clarity, and ease of use of the interface… I’ll also comment on the overall appearance, the look and feel.

Originally posted 2010-01-08 01:00:19.

Ubuntu – Desktop Search

Microsoft has really shown the power of desktop search in Vista and Windows 7; their newest desktop search engine works, and works well… so in my quest to migrate over to Linux I wanted to have both a server-style and a desktop-style search.

So the quest begun… and it was as short a quest as marching on the top of a butte.

I started by reviewing what I could find on the major contenders (just do an Internet search, and you’ll only find about half a dozen reasonable articles comparing the various desktop search solutions for Linux)… which were few enough it didn’t take very long (alphabetical):

  • Beagle
  • Google Desktop Search
  • Recoll
  • Strigi
  • Tracker

My metrics to evaluate a desktop search solution would focus on the following points:

  • ease of installation, configuration, maintenance
  • search speed
  • search accuracy
  • ease of access to search (applet, web, participation in Windows search)
  • resource utilization (cpu and memory on indexing and searching)

I immediately passed on Google Desktop Search; I have no desire for Google to have more access to information about me; and I’ve tried it before in virtual machines and didn’t think very much of it.

Beagle

I first tried Beagle; it sounded like the most promising of all the search engines, and Novell was one of the developers behind it so I figured it would be a stable baseline.

It was easy to install and configure (the package manager did most of the work); and I could use either the search application or the web search, though for the web interface I had to enable it using beagle-config:

beagle-config Networking WebInterface true

And then I could just go to port 4000 (either locally or remotely).
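You can also poke at the index from a terminal (assuming the command-line tools came along with the package; the search term is just an example):

beagle-query invoice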

I immediately did a test search; nothing came back.  Wow, how disappointing — several hundred documents in my home folder should have matched.  I waited and tried again — still nothing.

While I liked what I saw, a search engine that couldn’t return reasonable results to a simple query (at all) was just not going to work for me… and since Beagle isn’t actively developed any longer, I’m not going to hold out for them to fix a “minor” issue like this.

Tracker

My next choice to experiment with was Tracker; you couldn’t ask for an easier desktop search to experiment with on Ubuntu — it seems to be the “default”.

One thing that’s important to mention — you’ll have to enable the indexer (per user); it’s disabled by default.  Just use the configuration tool (you might need to install an additional package):

tracker-preferences

Same test, but instantly I got about a dozen documents returned, and additional documents started to appear every few seconds.  I could live with this; after all I figured it would take a little while to totally index my home directory (I had rsync’d a copy of all my documents, emails, pictures, etc from my Windows 2008 server to test with, so there was a great deal of information for the indexer to handle).
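Tracker also ships a command-line client (at least in the Ubuntu package I was using) if you want to query the index outside the applet; the search term below is just an example:

tracker-search invoice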

The big problem with Tracker was there was no web interface that I could find (yes, I’m sure I could write my own web interface; but then again, I could just write my own search engine).

Strigi

On to Strigi — straightforward to install, and easy to use… but it didn’t seem to give me the results I’d gotten quickly with Tracker (though better than Beagle), and it seemed to be limited to only ten results (WTF?).

I honestly didn’t even look for a web interface for Strigi — it was way too much of a disappointment (in fact, I think I’d rather have put more time into Beagle to figure out why I wasn’t getting search results than work with Strigi).

Recoll

My last test was with Recoll; while it looked promising from all that I read, everyone seemed to indicate it was difficult to install and that you needed to build it from source.

Well, there’s an Ubuntu package for Recoll — so it’s just as easy to install; it just turned out to be a waste of effort.

I launched the recoll application, and typed a query in — no results came back, but numerous errors were printed in my terminal window.  I checked the preferences, and made a couple minor changes — ran the search query again — got a segmentation fault, and called it a done deal.

It looked to me from the size of the database files that Recoll had indexed quite a bit of my folder; why it wouldn’t give me any search results (and seg faulted) was beyond me — but it certainly was something I’d seen before with Linux based desktop search.

Conclusions

My biggest conclusion was that Desktop Search on Linux just isn’t really something that’s ready for prime time.  It’s a joke — a horrible joke.

Of the search engines I tried, only Tracker worked reasonably well, and it has no web interface, nor does it participate in a Windows search query (an SMBv2 feature which directs the server to perform the search when querying against a remote file share).

I’ve been vocal in the past that Linux fails as a Desktop because of the lack of a cohesive experience; but it appears that Desktop Search (or search in general) is a failing of Linux as both a Desktop and a Server — and clearly a reason why choosing Windows Server 2008 is the only reasonable choice for businesses.

The only upside to this evaluation was that it took less time to do than to read about or write up!

Originally posted 2010-07-06 02:00:58.

Dynamic IP Filtering (Black Lists)

There are a number of reasons why you might want to use a dynamic black list of IP addresses to prevent your computer from connecting to, or being connected to by, users on the Internet who might not have your best interests at heart…

Below are three different dynamic IP filtering solutions for various operating systems; each of them is open source, has an easy to use GUI, and uses the same filter list formats (and will download those lists from a URL or load them from a file).

You can read a great deal more about each program and the concepts of IP blocking on the web pages associated with each.
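If you’re curious what this looks like under the hood on a Linux box, the same idea can be roughed out with ipset and iptables (the list name and address below are placeholders, and the exact ipset syntax varies a bit between versions):

  • sudo ipset create blacklist hash:net
  • sudo ipset add blacklist 192.0.2.0/24
  • sudo iptables -I INPUT -m set --match-set blacklist src -j DROP

The programs mentioned essentially maintain a (very large) set like this for you and keep it refreshed from the published block lists.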

Originally posted 2010-08-17 02:00:55.

USB Hard Drive Adapters

Everyone’s making them and they come in really handy…

Basically they’re devices you can use to access a bare hard drive.  Most of them support PATA and SATA 2.5″ and 3.5″ drives (though some vendors require a bunch of adapters to do it).  The APRICORN DriveWire unit is clean and simple and priced around $30 (use a price search engine) or less.

I was so happy to find these units that I purchased two of them and gave away my previous ones made by another vendor.

If you’re going to routinely swap drives on and off a computer, and don’t want to spring for an external case, you might be better off with a hard drive dock (also available for about $30), but those don’t support PATA (PATA is not hot swappable).

If you’re going to use these units to upgrade a computer’s hard drive, remember Acronis TrueImage is a great tool (you can find shareware and OpenSource tools as well — but TrueImage is well worth the price and has many additional features that you’ll likely find useful).
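If you’d rather use a free tool and the new drive is at least as large as the old one, a raw copy from a Linux live CD will also do the job (the device names below are only examples; triple-check them, because dd will happily overwrite the wrong disk):

sudo dd if=/dev/sda of=/dev/sdb bs=4M conv=noerror,sync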


APRICORN: DriveWire – Universal Hard Drive Adapter

Originally posted 2008-12-29 12:00:32.

Virtual machines need regular defragging, researcher says

This comes from an article on ComputerWorld, all I can say is duh!

Virtual disks require the same defragmenting as the same operating system would need running on a physical machine; plus if you choose dynamically expanding containers for the disk on the host, you’ll likely need to power down the machine and periodically defragment the host as well.
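With VirtualBox, for example, the usual routine (a sketch, assuming a Windows guest in a .vdi container; the file and drive names are placeholders) is to defragment inside the guest, zero out the free space, power the guest down, and then compact the container on the host:

  • defrag C:   (inside the guest)
  • sdelete -z C:   (inside the guest; SDelete is a free Sysinternals tool that zeroes free space so the compactor can reclaim it)
  • VBoxManage modifyhd Guest.vdi --compact   (on the host, with the guest powered off)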

You’d think that an article that starts with a title like that couldn’t possibly get any more asinine; well, you’d be wrong:

Windows, as well as third-party software firms, offer defragmenters to reassemble fragmented files. Fragmentation is not as large of a problem on Unix systems, due to the way that the OS writes files to disk.

Apparently the author seems to think that just because Windows includes software to defragment the file system, it must be much more susceptible to fragmentation.  He’d be right if we were talking about Windows 98 or if people chose not to run NTFS… but he and the article he references are dead wrong.

NTFS has almost identical abilities to the EXT2, EXT3, and EXT4 file systems to avoid fragmentation — the difference is that NTFS supports defragmentation of the file system (and Windows ships with a rudimentary defragmenter).  In fact, if *nix file systems were so impervious to fragmentation, why would the ability to defragment be one of the major feature additions in EXT4 (though not fully implemented yet)?
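If you’re curious how fragmented a given file actually is on an ext volume, filefrag (from e2fsprogs) will show you the extent count; the path below is just an example:

sudo filefrag -v /var/log/syslog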

There are many things about *nix type operating systems that can clearly be pointed to as superior to Windows; resistance to fragmentation simply isn’t one of them.  WAKE UP and live in the current millennium; we don’t need to confuse FAT16/FAT32 with Windows.

Virtual machines need regular defragging, researcher says
By Joab Jackson on ComputerWorld

Originally posted 2010-10-12 02:00:44.