Entries Tagged as 'Technology'

Graphene

First theorized by P.R. Wallace in 1947, first isolated by Andre Geim and Konstantin Novoselov in 2004, and the subject of the 2010 Nobel Prize in Physics, graphene is a material made from carbon…

There’s a great deal of money funding research, and there are many practical uses for the material that we’re likely to see marketed soon — the potential has been touted as enormous… but the expectation that graphene is a replacement for silicon might be a little overstated.

Here’s a bullet list — and you can read an article on graphene on BBC News — Is graphene a miracle material?


  • Graphene is taken from graphite, which is made up of weakly bonded layers of carbon
  • Graphene is composed of carbon atoms arranged in tightly bound hexagons just one atom thick
  • Three million sheets of graphene on top of each other would be 1mm thick (see the quick check after this list)
  • The band structure of graphite was first theorized and calculated by P.R. Wallace in 1947, though a free-standing single layer was thought to be impossible in the real world
  • Due to the timing of this discovery, some conspiracy theorists have linked it to materials at the Roswell “crash site”
  • In 2004, teams including Andre Geim and Konstantin Novoselov demonstrated that single layers could be isolated, resulting in the award of the Nobel Prize for Physics in 2010
  • It is a good thermal and electrical conductor and can be used to develop semiconductor circuits and computer parts. Experiments have shown it to be incredibly strong
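
That “three million sheets” figure is easy to sanity check with a quick back-of-the-envelope calculation (mine, not from the BBC article):

```python
# 1 mm stacked from 3,000,000 sheets -- how thick is each sheet?
sheets = 3_000_000
stack_thickness_m = 1e-3                  # 1 mm in metres

per_sheet_m = stack_thickness_m / sheets
print(f"{per_sheet_m * 1e9:.2f} nm per sheet")   # prints ~0.33 nm
# ...which lines up with graphite's interlayer spacing of roughly 0.335 nm.
```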


Originally posted 2011-06-05 02:00:00.

Windows 7 – Virtualization

So you’ve upgraded to Windows 7 and now you’re considering the options for running virtual machines…

If you have a PC that’s capable of hardware-assisted virtualization (Intel VT-x or AMD-V) and you’re running Windows 7 Professional or Ultimate, the decision is fairly easy: use Microsoft’s own Windows Virtual PC, which provides you with Windows XP Mode (as well as general-purpose virtualization).

However, if you don’t have a PC capable of hardware virtualization, or you didn’t spring for the more expensive version of Windows, you still have some good (free) choices.

While Microsoft doesn’t officially support Virtual PC 2007 SP1 on Windows 7, it was designed to run under Vista and it will work.  The real downside is that you have fairly old virtualization technology emulating antiquated hardware.

You could consider buying VMware or Parallels, but why spend money when there’s a better free alternative for personal use…

That would be – VirtualBox (yes, I’ve harped on VirtualBox for the Mac before, and now it’s time to harp on VirtualBox on the PC).

VirtualBox is a project sponsored by Sun Microsystems.  They’ve actually been working on virtualization for a very long time, and their technology is top notch.

VirtualBox will run on several different operating systems, and you can even share the virtual machine files between operating systems if you like.  But one of the really nice things about VirtualBox is that it will support machines with or without hardware-assisted virtualization, and it emulates very modern hardware (which makes the paravirtualization of devices much more efficient).
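
If you like to script things, VirtualBox’s VBoxManage command line tool can drive the whole lifecycle of a guest; here’s a minimal sketch in Python (the VM name, disk path, memory and disk sizes are placeholder values, and it assumes VBoxManage is on your PATH):

```python
import subprocess

def vbox(*args):
    """Run a VBoxManage command and stop if it fails."""
    subprocess.run(["VBoxManage", *args], check=True)

# Placeholder name and disk path -- adjust for your own setup.
name = "Win7-Test"
disk = r"C:\VMs\Win7-Test\Win7-Test.vdi"

vbox("createvm", "--name", name, "--register")             # register a new VM
vbox("modifyvm", name, "--memory", "1024", "--cpus", "1")  # basic resources
vbox("createhd", "--filename", disk, "--size", "20480")    # 20 GB virtual disk
vbox("storagectl", name, "--name", "SATA", "--add", "sata")
vbox("storageattach", name, "--storagectl", "SATA",
     "--port", "0", "--device", "0", "--type", "hdd", "--medium", disk)
vbox("startvm", name, "--type", "gui")                     # boot it
```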

Unless you have specific requirements that force you to choose other virtualization software, I would recommend you take a good look at VirtualBox.

VirtualBox

Originally posted 2009-11-14 01:00:43.

4% of the Market; 50% of the Profit

Apple’s iPhone accounts for only 4% of the cellular handset market (a market still dominated by “feature” phones), yet accounts for 50% of the profits…



asymco.com

Originally posted 2010-11-29 02:00:46.

Virtualization Best Practices, Using Undo

One of the most powerful features of virtualization is the ability to use undo disks (also called snapshots or checkpoints).

What this allows you to do is set the machine in a mode where you can decide at a later date whether or not you want to keep the changes — which is a great way to test out new software in a virtual environment (NOTE:  Acronis TrueImage provides a similar capability on physical machines).

The penalty of using undo disks is that you have to commit all the changes or none of the changes; and the system will run slower.

An alternative to using the built-in undo technology of the virtualization system is to copy the disk before you start the machine (it’s just a file on your hard drive), and restore it afterwards.  Sometimes this is a better solution, particularly if you need the virtual machine to run as fast as possible and you’re not worried about the time it takes to make a copy of the disk before you run the virtual machine (NOTE:  you can simply delete the modified disk and move the copy into place when you’re done — that’s almost instantaneous).
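
Here’s a minimal sketch of that copy-then-restore approach in Python; the disk path is a placeholder, and actually starting and stopping the guest is left to whatever virtualization product you use:

```python
import shutil
from pathlib import Path

# Placeholder path -- point this at your own virtual machine's disk image.
disk   = Path("C:/VMs/TestBox/TestBox.vhd")
backup = disk.with_name(disk.name + ".pristine")

def copy_before_run():
    """Copy the disk image before starting the VM (this is the slow part)."""
    shutil.copy2(disk, backup)

def discard_changes():
    """'Undo' after the VM is shut down: drop the modified disk and move the
    pristine copy back into place -- this part is almost instantaneous."""
    disk.unlink()
    backup.rename(disk)

def keep_changes():
    """Commit the changes: just throw the pristine copy away."""
    backup.unlink()

# Usage: copy_before_run(), run the virtual machine as usual, shut it down,
# then call discard_changes() or keep_changes() depending on your decision.
```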

One other thing you’ll want to be sure of is that you start the machine with undo disabled when you want to update the operating system and do maintenance.  You’ll also want to make sure that any checkpoints the operating system has created (Windows calls them “restore points”) are deleted before you complete your maintenance cycle; there’s generally no reason why you’d want multiple levels of “undo”.

I often use the “undo” feature to try out software I download from the internet.  I have a test machine set up with a virus scanner, and I can monitor the changes that installing and running the software attempts to make to the machine.  Plus I can try out the software and decide if it’s something valuable or not.  And there is the case where I will only need to run it once (or very rarely) and don’t want it polluting my real machine.

Developing the discipline of using virtualization with “undo” enabled can save you from a number of headaches, and is in itself a great reason to consider installing and using virtualization technology.

Originally posted 2009-01-14 12:00:42.

Fix It

About a year before Microsoft Windows 7 hit the street, Microsoft had started to introduce the “Fix It” logo associated with “solutions” to problems in Windows.

In Windows 7 Microsoft incorporated the solution center to partially automate finding and fixing issues that could cause problems with Windows.

Now Microsoft has expanded “Fix It” to include Windows Vista and Windows XP…

Thank you for your interest in Microsoft Fix it. We’re working hard to automate solutions to common software problems in an easy, intuitive way that is available when and where you need it. So whether you are looking for a solution in help or support content, or an error report, Fix it provides a way to apply automated fixes, workarounds, or configuration changes so you don’t have to perform a long list of manual steps yourself.

Microsoft Fix It

Fix It

Originally posted 2010-04-27 02:00:21.

Hyper-V Server

With the release of Windows Server 2008, Microsoft made a huge step forward in releasing a thin, high-performance hypervisor for machine virtualization – Hyper-V.

Microsoft has also baited the market by offering a free version of Windows Server 2008 specifically designed to be a virtualization host: Hyper-V Server.

I decided to play with Windows Server 2008 with Hyper-V and Hyper-V Server to get a feel for what they could do.

Installation is a snap; much the same as Vista.

With Windows Server 2008 with Hyper-V everything goes very smoothly and just works.  You can use the Hyper-V manager to set up virtual machines, run them, stop them, etc.  But one thing you’ll want to do while you have Windows Server 2008 up and running is figure out everything you need to do to remotely manage Hyper-V and Server 2008 from your workstation, because Hyper-V Server isn’t going to allow you to do much from the console.

To say it’s a little complicated to get remote Hyper-V management working is an understatement; after I figured it out I found a tool that can help automate the setup — it makes life much easier.

The one thing I never got working from Vista x64 was remote management of Windows Server 2008 – and you really need that as well (remember you don’t get much capability from the console).  I’ll probably play with that a little more; and certainly I’ll get it working before I deploy any Hyper-V servers (it’s not a huge problem if you have a Windows Server 2008 machine already, remote management of other Windows Server 2008 boxes just works).

Now, after the headache of getting everything configured properly, it was time to put Hyper-V through its paces.

First task: migrate a machine over from Virtual Server 2005 R2 SP2… piece of cake — copy over the VHD files, create a machine, hook up the disks (backtrack if needed, since Hyper-V seems to have a fairly set directory layout for machines and disks — if you create a new machine on Hyper-V first you’ll see the layout).  Boot the machine, connect, remove the old virtual machine additions, reboot, install the new integration services — it asks to update the HAL (say yes), reboot, finish installing the integration services, reboot, regenerate the SID and rename the machine (I still have the old one, and I don’t want confusion)… and everything works great.  Shut down the machine, add a second processor, start it up… and a dual-processor virtual machine is born.

I migrated over 32-bit XP Professional and did a test install of 64-bit Server 2003… and everything worked just fine.

Don’t get carried away just yet.

There are a couple of gotchas with this.

  • To effectively use the free Hyper-V Server you either need a Windows Server 2008 (full install) or you need to get the remote tools working from your workstation; that’s non-trivial.
  • To run Hyper-V Server or Windows Server 2008 with Hyper-V you need a machine with hardware virtualization and execute disable (which really isn’t that uncommon these days, just make sure your BIOS has those features enabled).
  • Once you migrate a machine to Hyper-V there’s no automated way to go back to Virtual Server 2005 R2 SP2 (sure you can probably do it — but it’s going to be a pain).
  • To get performance out of Hyper-V you really need to use SCSI virtual disks; right now Microsoft doesn’t support booting from SCSI disks in Hyper-V since they only support the para-virtualized SCSI interface.  So to get performance you have to have an IDE boot disk and run off SCSI disks (not exactly a common installation, so you probably won’t be converting any physical machines like that — and it seems like a nightmare just waiting to unfold).

Fortunately I’m not in a huge hurry to move to Hyper-V; since it’s a cornerstone of Microsoft’s push to own the virtual infrastructure market, I suspect the issues that keep it from being all it can be will be resolved quickly.

And I’ll close on an up note… WOW — the performance was very impressive… I really wish I had a test machine with lots of spindles to see what kind of load I could realistically put on it.

Originally posted 2008-11-15 08:00:52.

Microsoft Security Essentials

A few years ago Microsoft® provided a free Beta of its Anti-Virus solution; and Beta users were provided with one free license to continue to use the “OneCare” branded Anti-Virus.

Now (as of 29 September 2009 – yesterday) Microsoft is once again providing a free Anti-Virus for “genuine” Windows.

Personally I use Avast’s free version; I’d consider using the Microsoft AV on servers, but the free version only supports desktop versions of Windows (like Avast).

http://www.microsoft.com/security_essentials/

Originally posted 2009-09-30 01:00:29.

Browser Spelling Check

If you use Firefox you’re set; builds of it have included a spell checker for quite some time.  However, if you use Internet Explorer you’re going to want to look into a spell check add-on.

Some of the spell check add-ons depend on the presence of Microsoft’s spell check (you get that with Office products, like Word); but one of the better ones does not.

ieSpell works well, and some JavaScript add-ins on web pages will automatically detect it (as they do Firefox’s spell check) and work the same; but when they don’t, you have the ability to use the context menu to spell check the contents of an edit box.

For personal use ieSpell is totally free; for commercial use you should check the licensing.

Originally posted 2008-12-13 12:00:34.

Google Music – Beta

Google has launched their cloud-based streaming music service as a beta; you can request an invitation (using a Gmail account) via the link below.

What does it get you?

Well, up to 20,000 songs in your cloud storage; playback support on most Android devices; playback support from a browser; and an upload program that will sync your library to the cloud.

Not bad for free.

Apple provides a similar service for $25 per year; there’s no limit to the amount of music you can store.  The main differences are that there’s no Android support (basically, the devices iTunes supports are supported), and Apple actually fingerprints the files and serves its iTunes version of the music rather than your copy (likely at a higher bit rate — and, of course, they don’t incur the storage overhead).

Amazon provides a similar service for $20 per year (you also get some storage for other files); and there’s no limit to the amount of music you can store, but you might find their uploader is a little less friendly to use (OK — to be fair it’s been updated since I tested it — so maybe not).

You can play with the free 5GB version of the Amazon service and decide if you like it and whether it’s worth the $20 (I was hoping they’d just bundle it into Prime — but if they’re serious about Hulu they really need to start à la carte charges for services, or Prime is going to have to go up in price).

Anyway, if you have an Android device, I highly recommend you go ahead and request an invite to the Google Music Beta — you can try the Amazon service out as well… if you have an iOS device, you’re probably stuck with the Apple solution (but you’re an Apple customer, so you’re used to having to shell out money for everything).

Also, the Amazon tablets will reportedly ship with a free Prime subscription, and possibly a free year of cloud storage might be thrown in as well (that’s speculation on my part).

http://music.google.com/about/

Originally posted 2011-09-10 02:00:28.

Virtualization Outside the Box

I’ve posted many an article on virtualization, but I felt it was a good time to post just an overview of the choices for virtualization along with a short blurb on each.

Obviously, the operating system you choose and the hardware you have will greatly limit the choices you have in making a virtualization decision.  Also, you should consider how you intend to use virtualization (and for what).

Microsoft VirtualPC (Windows and a very outdated PowerPC Mac version) – it’s free, but that doesn’t really offset the fact that VirtualPC is aging technology, it’s slow, and it had been expected to die (but it was used as the basis for Windows 7 virtualization).

Microsoft Hyper-V (Windows Server 2008, “bare metal”) – you can get a free Hyper-V Server distribution, but you’ll find it hard to use without a full Server 2008.  Hyper-V is greatly improved over VirtualPC, but it implements a rather dated set of virtual hardware, it really doesn’t perform as well as many other choices, and it will only run on hardware that supports hardware virtualization (Intel VT-x or AMD-V).

VMware (Windows, Mac, Linux) – I’ll lump all of their products into one and just say it’s over-priced and held together by chewing gum and band-aids.  I’d recommend you avoid it — even the free versions.

VirtualBox (Windows, Mac, Linux, bare metal) – Sun (now Oracle) produces a commercial and open source (community) edition of an extremely good virtualization solution.  Primarily targeted at desktops, it implements a reasonably modern virtual machine, and will run on most any hardware.

Parallels (Windows, Mac, Linux, bare metal) – a very good virtualization solution, but it’s expensive — and it will continue to cost you money over and over again (upgrades are essential and not free between versions).  You can do much better for much less (like free).

QEMU (Windows, Linux, etc) – this is one of the oldest of the open source projects, and the root of many.  It’s simple, it works, but it’s not a good solution for most users.

Kernel-based Virtual Machines (KVM — don’t confuse it with Keyboard/Video/Mouse switches, the TLA is way overloaded) – this is the solution that Ubuntu (and other Linux distributions) chose for virtualization (though Ubuntu recommends VirtualBox for desktop virtualization).  KVM makes it moderately complicated to set up guest machines, but there are GUI add-ons as well as other tools that greatly simplify the task.
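
As an example of the kind of tool that simplifies things, libvirt’s virt-install utility can create a KVM guest in a single command; here’s a minimal sketch wrapped in Python (the guest name, ISO path, and sizes are placeholder values):

```python
import subprocess

# Placeholder values -- adjust the name, ISO path, and sizes for your setup.
subprocess.run([
    "virt-install",
    "--name", "test-guest",
    "--ram", "1024",          # MB of memory
    "--vcpus", "1",
    "--disk", "path=/var/lib/libvirt/images/test-guest.img,size=8",  # 8 GB disk
    "--cdrom", "/isos/install-media.iso",   # installation media to boot from
], check=True)
```

The virt-manager GUI walks you through the same steps graphically.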

Xen (Linux) – an extremely good hypervisor implementation (the architecture of Hyper-V and Xen share many of the same fundamental designs); it will run Xen-enabled (modified) kernels efficiently on any hardware, but requires hardware-assisted virtualization for non-modified kernels (like Windows).

XenSource (bare-metal [Linux]) – this is a commercial product (though now available at no cost) acquired by Citrix which also includes a number of enterprise tools.  All the comments on Xen (above) apply, with the addition that this package is ready (and supported) for enterprise applications and is cost effective in large and small deployments.


My personal choice remains VirtualBox for desktop virtualization on Windows, Mac, and Linux, but if I were setting up a virtual server I’d make sure I evaluated (and would likely choose) XenSource (it’s definitely now a much better choice than building a Hyper-V based solution).

Originally posted 2010-05-03 02:00:58.