Entries Tagged as 'Windows Server'

Windows Component Clean Utility

When you install SP2 for Windows Vista or Windows Server 2008 you will also get the Component Clean utility (compcln.exe).

This utility will remove previous component versions from your computer, saving disk space and reducing the size of the installation catalog.

The caveat is that once you remove previous components you will not be able to go back to them.

Before running this utility it’s prudent to ensure that your computer is stable after the last update, and to create a backup (using something like Acronis or the backup tool included with Vista).
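
If you want to run it by hand, it’s as simple as launching it from an elevated command prompt (assuming the default System32 location):

%windir%\System32\compcln.exe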

Performing simple maintenance tasks and reducing the amount of “fluff” on your disk will help keep your computer running well and running fast[er] — remember, the Disk Cleanup tool is a good thing to run occasionally, and even the included disk defragmenter will help after a great deal of use (though not as much as something like O&O Defrag).

Originally posted 2009-06-09 11:00:12.

Virtualization, Virtualization, Virtualization

For a decade now I’ve been a fan of virtualization (of course, that’s partially predicated on understanding what virtualization is and how it works — and its limitations).

For software developers it offers a large number of practical uses… but more and more the average computer user is discovering the benefits of using virtual machines.

In Windows 7 Microsoft has built the “Windows XP” compatibility feature on top of virtualization (which means to use it you’ll need a processor that supports hardware virtualization — so many low end computers and notebooks aren’t going to have the ability to use the XP compatibility feature).
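
If you’re not sure whether a given processor supports hardware virtualization, the Sysinternals Coreinfo utility will tell you (assuming you’ve downloaded coreinfo.exe from Microsoft; the -v switch limits the report to virtualization-related features):

coreinfo -v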

While Windows 7 might make running older programs seamless, you can (of course) install another virtualization package and still run older software.

Which virtualization package to choose???

Well, for me it’s an easy choice…

  • Windows Server 2008 on machines that have hardware virtualization – Hyper-V
  • Windows 7 on machines that have hardware virtualization – Virtual PC
  • All others (Windows, OS X, Linux) – VirtualBox

Now, the disclaimers… if I were running a commercial enterprise and didn’t want to spend the money to buy Windows Server 2008, Microsoft does offer the standalone Hyper-V Server 2008 at no cost (you really need one copy of Windows Server 2008 in order to effectively manage it — but you can install the management tools on Vista if you really don’t have it in your budget to buy a single license).

And no, I wouldn’t choose Linux OR OS X as the platform to run a commercial virtualization infrastructure on… simply because Windows’ device support for modern hardware (and modern hardware is what you’re going to base a commercial virtualization infrastructure on if you’re serious) is unparalleled. PERIOD.

If you’re running Vista or Vista 64 you may decide to use Virtual PC (a better choice would be Virtual Server 2005 R2); but VirtualBox is being actively developed, and its virtual hardware reference is much more modern (and, I feel, a better choice).

To make it simple… the choice comes down to Microsoft Hyper-V derived technology or VirtualBox.  Perhaps if I were a *nix bigot I’d put Xen in the loop, but as with so many Linux-centric projects there are TOO MANY distributions, and too many splinter efforts.

One last note: keep in mind that you need a license for any operating system that you run in a virtual environment.

Originally posted 2009-08-12 01:00:34.

Online Capacity Expansion

Well…

  • Call me old-fashioned…
  • Call me conservative…
  • Call me a doubting “Thomas”…
  • Call me tickled pink…
  • Call me surprised…

I just finished adding four additional spindles to one of my virtual hosts; when I originally built it out I only had four spindles available, and didn’t want to buy more since I knew I would be freeing up smaller spindles for it soon.

The first task was to have the RAID software add the new spindles to the array, then to “expand” the array container… the first step took only a few moments, the second step took about 20 hours for the array controller to rebuild / expand the array.

The second task was to get Windows to actually use the added space by expanding the volume; doing that was a simple matter of using diskpart.exe (you can search Microsoft’s Knowledge Base for the details) and took only a few moments.
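
For reference, the diskpart session looks something like this (the volume number here is a placeholder; pick yours from the list):

diskpart
DISKPART> list volume
DISKPART> select volume 3
DISKPART> extend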

The incredible thing about this was that my virtual host and virtual machines were online for the entire 20 hours — with absolutely no service interruption.

This particular machine used a Dell / LSI controller; but the Promise controllers also support dynamic capacity expansion, as do 3Ware controllers.  I believe the Intel Matrix pseudo-RAID controller also supports dynamic capacity expansion; but as with other RAID and pseudo-RAID controllers you should check the documentation specific to it and consult the manufacturer’s web site for errata and updates before proceeding.

The bottom line is Windows and RAID arrays have come a long way, and it’s quite possible that you will be able to expand the capacity of your array without taking your server down; however, if the data on the server is irreplaceable, I recommend you consider backing it up (at least the irreplaceable data).

Originally posted 2008-12-01 12:00:56.

Web Servers

For several years I’ve used a combination of Microsoft IIS and Apache, which fits in with my belief that you choose the best tool for the job (and rarely does one tool work best across the board).

About a month ago I “needed” to do some maintenance on my personal web server, and I started to notice the number of things that had been installed on it… like two versions of Microsoft SQL Server (why a Microsoft product felt the need to install the compact edition when I already had the full-blown edition is beyond me).

As I started to peel away layer upon layer of unnecessary software I realized that my dependency on IIS was one very simple ASP.NET script I’d written for a client of mine and adapted for my own use (you could also say I’d written it for my use and adapted it for them).

I started thinking, and realized it would take me about ten minutes to rewrite that script in PHP, and in doing that I could totally eliminate my personal dependency on IIS and somewhat simplify my life.

In about half an hour (I had to test the script, and there was more to uninstall) I had a very clean machine with about 8GB more disk space and no IIS… and the exact same functionality (well — I would argue increased functionality, since there was far less software that I would have to update and maintain on the machine).

Sure, there are cases where ASP.NET is a good solution (though honestly I absolutely cannot stand it or its development environment; it seems to me like an environment targeted at mediocre programmers who have no understanding of what they’re doing, and an incredible opportunity for security flaws and bugs)… but many times PHP works far better, and for very complex solutions a JSP (Java Servlet / JavaServer Pages) solution would likely work better.

My advice: think through what your (technical) requirements are and consider the options before locking into proprietary solutions.

Originally posted 2010-03-24 02:00:33.

Ubuntu – Desktop Search

Microsoft has really shown the power of desktop search in Vista and Windows 7; their newest Desktop Search Engine works, and works well… so in my quest to migrate over to Linux I wanted both a server-style and a desktop-style search.

So the quest began… and it was as short a quest as marching on the top of a butte.

I started by reviewing what I could find on the major contenders (just do an Internet search, and you’ll only find about half a dozen reasonable articles comparing the various desktop search solutions for Linux)… which were few enough that it didn’t take very long (alphabetical):

  • Beagle
  • Google Desktop Search
  • Recoll
  • Strigi
  • Tracker

My metrics to evaluate a desktop search solution focused on the following points:

  • ease of installation, configuration, maintenance
  • search speed
  • search accuracy
  • ease of access to search (applet, web, participation in Windows search)
  • resource utilization (CPU and memory during indexing and searching)

I immediately passed on Google Desktop Search; I have no desire for Google to have more access to information about me; and I’ve tried it before in virtual machines and didn’t think very much of it.

Beagle

I first tried Beagle; it sounded like the most promising of all the search engines, and Novell was one of the developers behind it, so I figured it would be a stable baseline.

It was easy to install and configure (the package manager did most of the work); I could use either the search application or the web search, though I had to enable the latter using beagle-config:

beagle-config Networking WebInterface true

And then I could just go to port 4000 (e.g. http://localhost:4000, either locally or remotely).

I immediately did a test search; nothing came back.  Wow, how disappointing — several hundred documents in my home folder should have matched.  I waited and tried again — still nothing.

While I liked what I saw, a search engine that couldn’t return reasonable results to a simple query (at all) was just not going to work for me… and since Beagle isn’t actively developed any longer, I’m not going to hold out for them to fix a “minor” issue like this.

Tracker

My next choice to experiment with was Tracker; you couldn’t ask for an easier desktop search to experiment with on Ubuntu — it seems to be the “default”.

One thing that’s important to mention — you’ll have to enable the indexer (per user); it’s disabled by default.  Just use the configuration tool (you might need to install an additional package):

tracker-preferences
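
If the tool isn’t installed, apt can locate the package that provides it (the package name varies by Ubuntu release, so treat this as a starting point):

apt-cache search tracker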

Same test, but instantly I got about a dozen documents returned, and additional documents started to appear every few seconds.  I could live with this; after all I figured it would take a little while to totally index my home directory (I had rsync’d a copy of all my documents, emails, pictures, etc from my Windows 2008 server to test with, so there was a great deal of information for the indexer to handle).
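
For what it’s worth, the copy itself was plain rsync; the host and paths here are placeholders:

rsync -av me@server:Documents/ ~/Documents/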

The big problem with Tracker was that there was no web interface that I could find (yes, I’m sure I could write my own web interface; but then again, I could just write my own search engine).

Strigi

On to Strigi — straightforward to install, and easy to use… but it didn’t seem to give me the results I’d gotten quickly with Tracker (though better than Beagle), and it seemed to be limited to only ten results (WTF?).

I honestly didn’t even look for a web interface for Strigi — it was way too much of a disappointment (in fact, I think I’d rather have put more time into figuring out why Beagle wasn’t giving me search results than work with Strigi).

Recoll

My last test was with Recoll; it looked promising from all that I read, but everyone seemed to indicate it was difficult to install and that you needed to build it from source.

Well, there’s an Ubuntu package for Recoll — so it’s just as easy to install as the rest; it just turned out to be a waste of effort.

I launched the recoll application and typed in a query — no results came back, but numerous errors were printed in my terminal window.  I checked the preferences and made a couple of minor changes, ran the search query again — got a segmentation fault, and called it a done deal.

It looked to me from the size of the database files that Recoll had indexed quite a bit of my folder; why it wouldn’t give me any search results (and seg faulted) was beyond me — but it certainly was something I’d seen before with Linux-based desktop search.

Conclusions

My biggest conclusion was that Desktop Search on Linux just isn’t really something that’s ready for prime time.  It’s a joke — a horrible joke.

Of the search engines I tried, only Tracker worked reasonably well, and it has no web interface, nor does it participate in a Windows search query (an SMB2 feature which directs the server to perform the search when querying against a remote file share).

I’ve been vocal in the past that Linux fails as a desktop because of the lack of a cohesive experience; but it appears that desktop search (or search in general) is a failing of Linux as both a desktop and a server — and clearly a reason why choosing Windows Server 2008 is the only reasonable choice for businesses.

The only upside to this evaluation was that it took less time to do than to read about or write up!

Originally posted 2010-07-06 02:00:58.

Virtualization Solutions

On Windows there are basically three commercial solutions for virtualization, and several free solutions… wait, one of the commercial solutions is free (well, when you buy the operating system), and the other is partially free…

  • Microsoft Virtual PC (runs on both servers and workstations)
  • Microsoft Virtual Server (runs on both servers and workstations)
  • Microsoft Hyper-V (runs only on Windows Server 2008)
  • Parallels Workstation (runs on workstations)
  • Parallels Server (runs on both servers and workstations)
  • VMware Player (runs on both servers and workstations)
  • VMware Workstation (runs on both servers and workstations)
  • VMware Server (runs on both servers and workstations)
  • Citrix (aka XenSource)

For Intel-based Macs you have these commercial solutions:

  • Parallels Desktop
  • Parallels Server
  • VMware Fusion

And for Linux you have the following commercial solutions, and many free solutions (Xen being one of the leaders):

  • Parallels Desktop
  • Parallels Server
  • VMware Player
  • VMware Workstation
  • VMware Server
  • Citrix (aka XenSource)

And for bare metal you have:

  • Parallels Server
  • VMware ESX / ESXi

I’m not going to go into details on any of these; I just wanted to give at least a partial list with a few thoughts.

If you’re new to virtualization, use one of the free virtualization solutions.  You can try several of them, and many of them can convert a virtual machine from another vendor’s format to their own; but learn what the strengths and weaknesses of each are before you spend money on a solution that might not be the best for you.

Microsoft Virtual Server has some definite performance advantages over Microsoft Virtual PC… there are some things you might lose with Virtual Server that you might want (the local interface); but Virtual Server installs on both desktop and server platforms, so try it.

For Mac I definitely like Parallels Desktop better than VMware Fusion; but you may not share my opinion.  VMware claims to be faster, though I certainly don’t see it.  And I might add that if you have a decent machine to run your virtualization software on, fast isn’t going to be the number one concern — correctness is far more important.

Also, with each of the virtualization systems, hosts, and guests there are best practices for optimizing the installation and performance.  I’ll try to write up some of the information I’ve put together that keeps my virtual machines running well.

For the record, I run Microsoft Virtual Server 2005 R2 (64 bit) on Windows Server 2003 R2 x64 SP2, and on Windows Vista Ultimate and Business x64 SP1; works well.  And I run Parallels Desktop v3 on my Macs.

For the most part my guests are Windows XP Pro (x86) and Windows Server 2003 (x86); I don’t really need 64-bit guests (at the moment), but I do also run Ubuntu, Debian, Red Hat, Freespire, and other Linux distributions…

Like I said, figure out your requirements, play with several of the virtualization systems and spend your money on more memory, perhaps a better processor, and stick with the free virtualization software!

Originally posted 2008-05-18 20:25:18.

Windows 7 User Account Flaw

I’d say this is just an issue with Windows 7, but it’s actually been present in Windows and Windows Server since Vista…

Plainly put, the organization of information in Windows can become corrupt to the point that Windows is unable to create new users.

Really?

Windows (based on NT) is over a decade old… and to have such a basic flaw seems unthinkable!

Let’s see, to create a user…

  1. Check to make sure the log-on identifier is unique;
  2. Create a security descriptor;
  3. Create a user home directory;
  4. Copy user default template files to the home directory;
  5. Apply the security descriptor to the user home directory and files; and
  6. Update the user database.

Seems pretty straightforward to me.
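
For the record, the command-line equivalent is a one-liner (the account name here is a placeholder; the profile directory gets created at first log-on):

net user testuser /add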

And not only is it an essential function of an operating system, but it’s one that we should have every expectation will never fail — and if it does, there should be a procedure to fix it.

Oh, there are procedures to fix it — in fact there are so many procedures you could probably re-install the operating system a hundred times before trying all of them… and there’s more than one “Microsoft Fix-It” automated fix as well, and trust me — your odds of winning the lottery are probably better than the odds of one of them actually resolving your issue.

All I can say is that regardless of the potential Windows might have, Microsoft’s actions indicate that it’s not intended to be anything more than a toy operating system — and never was.

Originally posted 2013-09-03 12:00:00.

“<app name> not installed for the current user.  Please run setup”

ARGH!!!

It doesn’t happen often, but it does happen — something goes wrong when shutting down Windows or logging in and all of a sudden you can’t launch the application.

Generally I’ve seen this with Microsoft Office applications or other Microsoft applications…

Here’s a list of things to try (this is probably the least invasive order, but look through the list and decide which you want to try first):

First, look at the owner of the application; if it’s SYSTEM and not Administrator, change the application and shortcut permissions to be readable / writable by Administrator (you may have to delete and recreate the shortcuts).

  1. Uninstall the application, reboot, run a registry clean, reboot.
  2. Uninstall the application, reboot, run the Windows Install Cleanup Tool, reboot, run a registry clean, reboot.
  3. Delete the current user, and re-create the account (this will work if other users have no problem running the application, if all user accounts have the problem it’s not likely to work).
  4. If this is in Vista, turn off User Account Control (UAC), run as an account in the administrators group, and see if that resolves the problem (if it does, it’s got something to do with permissions and ownership — but it might be in the registry).
  5. Consider what type of plant you’d like to put in your planter.

If none of these work you can do an internet search and probably find lots more approaches; basically this relates either to a corrupt user profile (generally Windows will notify you when you log on that it wasn’t able to restore your profile or settings), or, if you could never run the app (and neither can any other user), to permissions (most common in Vista).
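
If it does come down to ownership and permissions, the takeown and icacls tools built into Vista can reset them; the path here is just a placeholder for your application’s folder:

takeown /F "C:\Program Files\SomeApp" /R /A
icacls "C:\Program Files\SomeApp" /grant Administrators:F /T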

For registry cleaners you can use a free piece of software, but I recommend you consider purchasing CleanMyPC:

You can find information on the Microsoft Windows Install CleanUp Utility here:

If you don’t know how to change permissions (ACLs) you might want to use a tool like SetACL:

Originally posted 2008-11-22 12:00:39.

Windows – Desktop Search

Most people realize how valuable Internet search engines are; but not everyone has figured out how valuable desktop (and server) search engines can be.

Even in corporate environments where data storage is highly organized it’s easy to forget where something is, or not know that someone else has already worked on a particular document — but if you could quickly and efficiently search all the public data on all the machines in your organization (or home) you could find those pieces of information you either misplaced or never knew about.

With Windows Search it just happens.  If you have access to a document and you search — you can find it.  Open up a file explorer window and point it at the location you think it might be in, type in the search box — and matching documents quickly appear (and those that don’t match disappear).  Do the same thing against a remote share — and it happens magically (the remote box does all the work).  It’s even possible to search multiple servers simultaneously — and it doesn’t require a rocket scientist to set up.

Windows Search is already on Windows 7 and Windows Server 2008 as well as Windows Vista (you’ll want to apply updates) — and easily installable on Windows XP and Windows Server 2003.  In fact, the defaults will probably do fine — just install and go (of course it will take a little while to index all your information).

A developer can fairly easily enhance search to include more document types by writing an IFilter (plenty of examples exist, and it uses a model that Microsoft has employed in many parts of Windows)…  The search interface can be used via API, embedded in a web page, or just used directly from the search applet (which appears auto-magically in Windows 7 and Windows Vista).

Very few Microsoft products are worth praise — but Windows Search is; and from my personal experience no competitor on any platform compares.

To those looking to write a “new” desktop search: look at Windows Search and understand what it does and how it works before you start your design.

Windows Search

Originally posted 2010-07-17 02:00:24.

Desktop Search

Let me start by saying that Windows Desktop Search is a great addition to Windows; and while it might have taken four major releases to get it right, for the most part it works and it works well.

With Windows Server 2008, Windows Vista, and Windows 7 Desktop Search is installed and enabled by default; and it works in a federated mode (meaning that you can search from a client against a server via the network).

Desktop Search, however, seems to have some issues with junction points (specifically in the case I’ve seen — directory reparse points, or directory links).

The search index service seems to do the right thing and not create duplicate entries when both the parent of the link and the target are to be indexed (though I don’t know how you would control whether or not the indexer follows links in the case where the target wouldn’t normally be indexed).

The search client, though, does not seem to properly provide results when junction points are involved.

Let me illustrate by example.

Say we have directory tree D1 and directory tree D2 and both of those are set to be indexed.  If we do a search on D1 it produces the expected results.  If we do a search on D2 it produces the expected results.

Now say we create a junction point (link) to D2 from inside D1 called L1.  If we do a search on L1 we do not get the same results as if we’d searched in D2.
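
For reference, such a junction can be created with the built-in mklink tool (paths assumed for this example):

mklink /J C:\D1\L1 C:\D2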

My expectation would be that the search was “smart” enough to do the search against D2 (taking the link into consideration) and then present the results with the path altered to reflect the link L1.

I consider this a deficiency; in fact it appears to me to be a major failing, since the user of the information shouldn’t be responsible for understanding all the underlying technology involved in organizing the information — he should just be able to obtain the results he expects.

It’s likely the client and the search server need some changes in order to accommodate this; and I would say that the indexer also needs a setting that would force it to follow links (though it shouldn’t store the same document information twice).

If this were a third party search solution running on Windows my expectation would be that file system constructs might not be handled properly; but last time I checked the same company wrote the search solution, the operating system, and the file system — again, perhaps more effort should be put into making things work right, rather than making things [needlessly] different.

Originally posted 2010-01-22 01:00:57.