Entries Tagged as 'Linux'

conglomeration

con·glom·er·a·tion (kən-glŏm′ə-rā′shən)
n.

    1. a. The act or process of conglomerating.
       b. The state of being conglomerated.
    2. An accumulation of miscellaneous things.

The American Heritage® Dictionary of the English Language, Fourth Edition copyright ©2000 by Houghton Mifflin Company. Updated in 2009. Published by Houghton Mifflin Company. All rights reserved.


conglomeration [kənˌglɒməˈreɪʃən] n

  1. a conglomerate mass
  2. a mass of miscellaneous things
  3. the act of conglomerating or the state of being conglomerated

Collins English Dictionary – Complete and Unabridged © HarperCollins Publishers 1991, 1994, 1998, 2000, 2003


conglomeration a cluster; things joined into a compact body, coil, or ball.

Examples: conglomeration of buildings, 1858; of chances; of Christian names, 1842; of men, 1866; of sounds, 1626; of threads of silk worms, 1659; of vessels, 1697; of words.

Dictionary of Collective Nouns and Group Terms. Copyright 2008 The Gale Group, Inc. All rights reserved.


The SCO infringement lawsuit over Unix is over… the courts have ruled that Novell owns the Unix copyrights, and SCO has no grounds for its litigation against Novell.  Just as Microsoft owned and retained the Xenix copyright while SCO distributed that operating system, so Novell retained the Unix copyright while SCO distributed Unix.

Which means Novell now has a prime asset — and could be ripe for harvesting (that’s a poetic way to say merger, takeover, buy-out).

Which will likely be bad for Linux.

WHAT?

Yep.  Take a look at what happened when Oracle purchased Sun (one of the largest companies supporting Open Source innovation in Linux, virtualization, etc.): there’s definite movement within Oracle to retreat from the Open Source and free (free – like free beer) software efforts that Sun was firmly behind.

Consider what happens if a company acquires Novell and uses the System V license from Novell to market a closed source operating system, and discontinues work on SUSE; or at minimum decides it won’t distribute SUSE for free (free – like free beer).

“Live free or die” might become a fading memory.

Originally posted 2010-06-05 02:00:18.

FileZilla – The free FTP solution

If you have a need to transfer files via FTP, SFTP, SCP, etc. and you prefer to use a graphical user interface on a Windows, Mac, or Linux machine — then the Open Source FileZilla is a very good solution to consider.

Just download the client, install it, and within a few moments you’ll have a connection to a server (you can save the connection information for quick reuse if you like).  The interface is clean and easy to understand, and supports drag-and-drop as well as transfers from the multi-pane manager.
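On Linux you may not even need to visit the download page; on Ubuntu, for example, the client is in the standard repositories:

sudo apt-get install filezilla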

And, you can’t beat the price – FREE.

http://filezilla-project.org/

Originally posted 2011-09-13 02:00:47.

Disk Bench

I’ve been playing with Ubuntu here of late, and looking at the characteristics of RAID arrays.

What got me onto this: when I formatted an ext4 file system on a four drive RAID5 array created using an LSI 150-4 [hardware RAID] controller, I noticed that it took longer than I thought it should.  And while most readers probably won’t be interested in whether or not to use the LSI 150 controller in their spare parts bin to create a RAID array on Linux, the numbers below are interesting just in deciding what type of array to create.

These numbers are obtained from the disk benchmark in Disk Utility, and this is only a read test (write performance is going to be quite a bit different; unfortunately the write test in Disk Utility is destructive, and I’m not willing to lose my file system contents at the moment, though I am still looking for other good benchmarking tools).

Array                 Drives   Avg Access Time   Min Read Rate   Max Read Rate   Avg Read Rate
ICH8 Single           1        17.4 ms           14.2 MB/s       23.4 MB/s       20.7 MB/s
ICH8 RAID1 (Mirror)   2        16.2 ms           20.8 MB/s       42.9 MB/s       33.4 MB/s
ICH8 RAID5            4        18.3 ms           17.9 MB/s       221.2 MB/s      119.1 MB/s
SiL3132 RAID5         4        18.4 ms           17.8 MB/s       223.6 MB/s      118.8 MB/s
LSI150-4 RAID5        4        25.2 ms           12.5 MB/s       36.6 MB/s       23.3 MB/s

All the drives used are similar class drives: Seagate Momentus 120GB 5400.6 (ST9120315AS) for the single drive and RAID1 (mirror) tests, and Seagate Momentus 500GB 5400.6 (ST9500325AS) for all the RAID5 tests.  Additionally, all drives show that they are performing well within acceptable operating parameters.
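If you want to sanity check numbers like these from a shell instead of a GUI, a couple of read-only tests will give you rough equivalents; a minimal sketch (the device name /dev/sdX is a placeholder for your drive or array):

# time buffered sequential reads from the device (run as root):
sudo hdparm -t /dev/sdX

# or read 1 GB straight off the device, bypassing the page cache:
sudo dd if=/dev/sdX of=/dev/null bs=1M count=1024 iflag=direct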

Originally posted 2010-06-30 02:00:09.

File System Fragmentation

All file systems suffer from fragmentation.

Let me rephrase that more clearly in case you didn’t quite get it the first time.

ALL FILE SYSTEMS SUFFER FROM FRAGMENTATION <PERIOD>.

It doesn’t matter what file system you use on your computer; if you delete and write files it will become fragmented over time.  Some older file systems (like, say, FAT and FAT32) had major performance issues as the file system began to fragment; more modern file systems don’t suffer as much performance loss from fragmentation, but they still suffer.

If you want to argue that your writable file system doesn’t fragment, you haven’t a clue what you’re talking about; read up on how your file system really works and how block devices work to understand why you just can’t have a file system that never fragments files, free space, or both.

What can you do about fragmentation?

Well, you might not really need to do anything; modern disk drives are fast, and on a computer that’s doing many things at once the fragmentation may not have much impact on your performance.  But after a while you’re probably going to want to defragment your files.

The act of copying a file will generally defragment it; most modern file systems will attempt to allocate contiguous space for a file if they can (files that grow over time cannot be allocated contiguously up front, but they can be defragmented at their current size).

On many operating systems you can actually get programs that are designed to defragment your file system.

How often should you defragment your file system?

Well, I generally recommend you do it right after installing and updating your computer, and then any time you make major changes (large software installation, large update, etc.).  But don’t do it automatically or on a routine schedule; there’s not enough benefit to that.

You can also analyze your disk (again using software) to determine how fragmented it is… and then defragment when it reaches some point that you believe represents a performance decrease.
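On Linux, for example, the ext4 tools can do both the analysis and the defragmenting; a minimal sketch (the paths are placeholders, and e4defrag ships with reasonably recent versions of e2fsprogs):

# show how many extents (fragments) a single file occupies:
filefrag -v /path/to/largefile

# report a fragmentation score for a mounted file system (read-only check):
sudo e4defrag -c /home

# defragment a file or directory when the score warrants it:
sudo e4defrag /path/to/largefile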

Also, try and keep your disk clean, delete your browser cache, temporary files, duplicate files, and clutter — the less “junk” you have on your disk, the less need there will be for defragmenting.

Originally posted 2009-01-05 12:00:03.

Linux – Desktop Search

A while ago I published a post on Desktop Search on Linux (specifically Ubuntu).  I was far from happy with my conclusions and I felt I needed to re-evaluate all the options to see which would really perform the most accurate search against my information.

Primarily my information consists of Microsoft Office documents, Open Office documents, pictures (JPEG, as well as Canon RAW and Nikon RAW), web pages, archives, and email (stored as RFC822/RFC2822 compliant files with an eml extension).

My test methodology was to take a handful of search terms which I knew existed in various types of documents, and check the results (I actually used Microsoft Windows Search 4.0 to prepare a complete list of documents that matched each query — since I knew it worked as expected).

The search engines I tested were Beagle, Google Desktop, Tracker, Pinot, Recoll, and Strigi.

I was able to install, configure, and launch each of the applications.  Actually none of them were really that difficult to install and configure; but all of them required searching through documentation and third party sites — I’d say poor documentation is just something you have to get used to.

Beagle, Google Desktop, Tracker, Pinot, and Recoll all failed to find all the documents of interest… none of them properly indexed the email files, and most of them failed to handle plain text files; that didn’t leave a very high bar to pick a winner.

Queries on Strigi actually provided every hit that the same query provided on Windows Search… though I have to say Windows Search was easier to set up and use.

I tried the Nepomuk (KDE) interface for Strigi — though it just didn’t seem to work as well as strigiclient did… and certainly strigiclient was pretty much at the top of the list of butt-ugly, user-hostile, un-intuitive applications I’d ever seen.

After all of the time I’ve spent on desktop search for Linux I’ve decided all of the current solutions are jokes.  None of them are well thought out, none of them are well executed, and most of them outright don’t work.

Like most Linux projects, more energy needs to be focused on working out a common framework for search, rather than everyone going off half-cocked and creating a new search paradigm.

The right model is…

A single multi-threaded indexer running in the background indexing files according to a system wide policy aggregated with user policies (settable by each user on directories they own) along with the access privileges.

A search API that takes the user/group and query to provide results for items that the user has (read) access to.

The indexer should be designed to use plug-in modules to handle particular file types (mapped both by file extension, and by file content).

The indexer should also be designed to use plug-in modules for walking a file system and receiving file system change events (that allows the framework to adapt as the Linux kernel changes — and would support remote indexing as well).

Additionally, the index/search should be designed with distributed queries in mind (often you want to search many servers, desktops, and web locations simultaneously).

Then it becomes a simple matter for developers to write new and better indexer plug-ins, and better search interfaces.
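On Linux the change-event side of that framework already exists in the kernel as inotify; to get a feel for the event stream an indexer plug-in would consume, here’s a minimal sketch (inotifywait comes from the inotify-tools package, and the path is a placeholder):

# watch a directory tree and print one line per file system change event:
inotifywait -m -r --format '%w%f %e' ~/Documents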

I’ve pointed out in a number of recent posts that you can effectively use Linux as a server platform in your business; however, it seems that if search is a requirement you might want to consider ponying up the money for Microsoft Windows Server 2008 and enjoying seamless search (that works) between your Windows Vista / Windows 7 desktops and Windows Server.

REFERENCES:

Ubuntu – Desktop Search

Originally posted 2010-07-16 02:00:19.

rsync

I’ve gotta ask the question why no one has ever invested the time and energy into making a port of the client code in rsync as a native Windows executable.

Yes, you can certainly run the rsync client (and even the server) under Cygwin; but efficiently backing up files from a workstation to a central store just seems like something that might have occurred to a Windows developer over the years.

Could it be that people only think in terms of homogeneous solutions?  That they simply can’t conceive that the best selection of a server might not involve the same operating system as a workstation or desktop?

Yeah — I understand that since many Windows desktops talk to Windows servers, rsync isn’t a commercially viable solution (or even a hobbyist solution) unless you have both server and client; but in many cases a Windows desktop talks to a *nix based server (or NAS), and all you really need is an rsync client.

The benefits of rsync seem to me to be well worth implementing a client on Windows.  While the newest version of the file sharing protocol in Windows Vista, Windows Server 2008, and Windows 7 has the ability to do differential file copies, that’s not something that’s likely to be implemented in an optimized fashion in non-Microsoft storage systems (and isn’t going to be implemented in Windows XP or Windows Server 2003 at all); nor is there any reason to depend on a file sharing protocol for synchronization.
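For reference, this is the kind of one-liner a native Windows client would make possible against a *nix server or NAS; a minimal sketch (the host and paths are placeholders):

# mirror a local directory to a remote store over SSH; -a preserves
# attributes, -v is verbose, --delete prunes files removed locally:
rsync -av --delete ~/Documents/ user@server:/backups/workstation/Documents/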

Anyway — rsync is a very efficient tool, and something you might want to keep in your toolbox.

Originally posted 2010-06-04 02:00:01.

Virtualization Outside the Box

I’ve posted many an article on virtualization, but I felt it was a good time to post just an overview of the choices for virtualization along with a short blurb on each.

Obviously, the operating system you choose and the hardware you have will greatly limit the choices you have in making a virtualization decision.  Also, you should consider how you intend to use virtualization (and for what).

Microsoft VirtualPC (Windows, and a very outdated PowerPC Mac version) – it’s free, but that doesn’t really offset the fact that VirtualPC is aging technology; it’s slow, and it had been expected to die (but was used as the basis for Windows 7 virtualization).

Microsoft Hyper-V (Windows Server 2008, “bare metal”) – you can get a free Hyper-V server distribution, but you’ll find it hard to use without a full Server 2008.  Hyper-V is greatly improved over VirtualPC, but it implements a rather dated set of virtual hardware, it really doesn’t perform as well as many other choices, and it will only run on hardware that supports hardware virtualization (Intel VT or AMD-V).

VMware (Windows, Mac, Linux) – I’ll lump all of their product into one and just say it’s over-priced and held together by chewing gum and band-aids.  I’d recommend you avoid it — even the free versions.

VirtualBox (Windows, Mac, Linux, bare metal) – Sun (now Oracle) produces a commercial and an open source (community) edition of an extremely good virtualization solution.  Primarily targeted at desktops, it implements a reasonably modern virtual machine, and will run on most any hardware.

Parallels (Windows, Mac, Linux, bare metal) – a very good virtualization solution, but it’s expensive — and it will continue to cost you money over and over again (upgrades are essential and not free between versions).  You can do much better for much less (like free).

QEMU (Windows, Linux, etc) – this is one of the oldest of the open source projects, and the root of many.  It’s simple, it works, but it’s not a good solution for most users.

Kernel-based Virtual Machine (KVM — don’t confuse it with Keyboard/Video/Mouse switches, the TLA is way overloaded) – this is the solution that Ubuntu (and other Linux distributions) chose for virtualization (though Ubuntu recommends VirtualBox for desktop virtualization).  KVM makes it moderately complicated to set up guest machines, but there are GUI add-ons as well as other tools that greatly simplify the task (see the sketch after this list).

Xen (Linux) – an extremely good hypervisor implementation (the architectures of Hyper-V and Xen share many of the same fundamental designs); it will run Xen-enabled (modified) kernels efficiently on any hardware, but requires hardware assisted virtualization for non-modified kernels (like Windows).

XenSource (bare-metal [Linux]) – this is a commercial product (though now available at no cost) acquired by Citrix which also includes a number of enterprise tools.  All the comments on Xen (above) apply, with the addition that this package is ready (and supported) for enterprise applications and is cost effective in large and small deployments.
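As an example of the KVM tooling mentioned above, creating a guest from the command line looks roughly like this on Ubuntu; a minimal sketch (the package names, VM name, disk size, and ISO path are placeholders that vary by release):

# install KVM and the libvirt management stack:
sudo apt-get install qemu-kvm libvirt-bin virtinst

# create and boot a new guest from an installer ISO:
sudo virt-install --name testvm --ram 1024 \
  --disk path=/var/lib/libvirt/images/testvm.img,size=8 \
  --cdrom /path/to/install.iso --vnc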


My personal choice remains VirtualBox for desktop virtualization on Windows, Mac, and Linux, but if I were setting up a virtual server I’d make sure I evaluated (and would likely choose) XenSource (it’s definitely now a much better choice than building a Hyper-V based solution).

Originally posted 2010-05-03 02:00:58.

Remember when…

Remember when it was just so darn easy to share files with other computers on your local area (home) network?  It was even simple to share files between PCs and Macs.

Have you noticed that, while Windows was once a very easy platform to share files from, it’s become almost impossible to share files even between two PCs running the same version of Windows?

If Microsoft is seeking to make their operating system more secure by making it unusable, they are getting very close to realizing their objective.

I really have grown tired of the complexities of sharing folders between PCs; more and more I’m finding that just using Box, Dropbox, or Google Drive is a much more efficient way to transfer small numbers of files between two machines — even if it’s a one time transfer.  I mean, yeah, it’s kind of absurd to send files to cloud storage potentially on the other side of the country just to copy them to a machine that’s a few feet away — but let’s be serious: it’s quicker than figuring out why Windows says the same user (with the same password) on two different machines, who should have unlimited rights to a directory, can’t copy a file from that machine and certainly can’t copy a file to it.

Yeah, it may seem backwards, but the days of using the *nix copy commands between remote machines seem easier…
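And that old approach still works anywhere an SSH server is listening; a minimal sketch (the host and paths are placeholders):

# copy a file to another machine over SSH:
scp report.pdf user@othermachine:/home/user/incoming/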

Microsoft needs to take a hard look at human factors, and not at all the whizzy new features they keep adding to their operating system, but at the foundation features that people (all people) actually use day in and day out for productivity — after all, we don’t all have domains at home… and not only do we sometimes move files between machines we own, but occasionally some of us might have a friend with a laptop come over.

I guess that’s why I keep a few fairly large USB drives around, because Microsoft certainly doesn’t want to actually make computers that run their operating system usable.

Originally posted 2013-11-03 10:00:23.

Thinking Inside the VirtualBox

Sun Microsystems used to be a major player in the computer world; and I guess since Java belongs to Sun they are still a fairly major force…

There are a number of open source and free projects that Sun sponsors; and, of course, it’s VirtualBox that has inspired this post.

VirtualBox 2.0.4 was released on 24 October 2008, and from my initial experiences with it, it’s a contender.

It’s a fairly mature x86/x64 virtualization framework for x86/x64 platforms.  VirtualBox runs on Windows, OS X, Linux, and of course Solaris.

What sets it apart?  Well, it’s to my knowledge the only fairly mature cross-platform virtualization framework that’s FREE on all platforms.

In general it doesn’t require hardware virtualization support, with the exception that to run an x64 guest you must be on an x64 host with hardware virtualization.

Going through the list of features and playing with it, there’s really nothing I could find that it didn’t do (and in playing with it, it seemed to work well)… the one feature that VirtualBox supports that none of its competitors had last time I looked (and that Hyper-V is sorely missing) is SATA (AHCI – Advanced Host Controller Interface) support.  AHCI provides much more efficient emulation of disk channel connections to the guest, and thus much better performance — and if you recall from my post on Hyper-V, the fact that Microsoft doesn’t have SCSI boot support or AHCI support at all is what prevents me from moving to Hyper-V.

VirtualBox does apparently support VMware virtual disks, but not Microsoft virtual disks.  Both formats have open specifications, so my only conclusion is that Sun’s anti-Microsoft bias is at play; that’s sad, since VirtualPC, Virtual Server, and Hyper-V account for a fairly substantial segment of the market, and a growing one.

Like any product, you really need to carefully evaluate it based on your needs; but my feeling is that for Mac users this might be the choice if you don’t want to buy Parallels Desktop… and for Windows desktops this looks to be a very good option.

NOTES:

On Windows, if you want to use this on a server host machine (i.e. one that doesn’t require users to interactively control the virtual machines), VirtualBox doesn’t really provide any interface for controlling machines in this manner; however, you can launch a VirtualBox machine from the command line, so you can have your server start up VirtualBox sessions at boot… though there are no tools provided by VirtualBox for managing running instances started this way.  My recommendation is that the VirtualBox team add a tool to manage and launch instances in a server environment.
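For reference, starting a registered machine without the GUI looks something like this; a minimal sketch (the VM name “MyServer” is a placeholder, and on newer releases VBoxManage startvm with a headless type does the same job):

# start a registered virtual machine without any GUI front end:
VBoxHeadless --startvm "MyServer"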

On Windows (and other OSs) VirtualBox handles host networking (the default is a NAT’d network through the host… which could have some performance impact) by using the TUN/TAP driver.  Certainly the way Microsoft handles virtualization of the network adapter is far slicker, and I found that using host networking is not as reliable as NAT; hopefully this is an area where there will be some improvement.

Lastly, I haven’t run any actual performance tests head-to-head with Parallels, VMware, VirtualPC, and Virtual Server… but I can tell you that guests “feel” substantially faster running under VirtualBox (I was quite impressed — and surprised).


VirtualBox

Originally posted 2008-12-08 12:00:55.

LibreOffice on Ubuntu

If you want LibreOffice on Ubuntu and you just can’t wait until 28-April-2011 to upgrade to Ubuntu 11.04 (which should include LibreOffice), then here’s the quick way to make it happen…


First, remove OpenOffice:

sudo apt-get remove openoffice*.*

Then set up the PPA:

sudo add-apt-repository ppa:libreoffice/ppa
sudo apt-get update

Then do one of the following, based on your desktop manager (libreoffice-gnome for GNOME, libreoffice-kde for KDE, or plain libreoffice for anything else):

sudo apt-get install libreoffice-gnome

sudo apt-get install libreoffice-kde

sudo apt-get install libreoffice

My recommendation is that you just wait and update your Ubuntu to 11.04 on Thursday — then remove OpenOffice and install LibreOffice… but you are the master of your own computer.

Originally posted 2011-04-26 02:00:51.