Thursday, September 30, 2010

In-the-Lab: Windows Server 2008 R2 Template for VMware

As it turns out, the reasonably simple act of cloning a Windows Server 2008 R2 (insert edition here) installation has been complicated by the number of editions, the changes from the 2008 release through 2008 R2, and the user profile management changes made since its release. If you're like me, you like to tweak your templates to limit customization steps in post-deployment. While most of these customizations can now be set up in group policies from AD, deploying non-AD members has become a lot more difficult - especially where custom defaults are needed or required.

Here's my quick recipe to build a custom image of Windows Server 2008 R2 that has been tested with Standard, Enterprise and Foundation editions.

Create VM, use VMXNET3 as NIC(s), 40GB "thin" disk, using 2008 R2 Wizard


This is a somewhat "mix to taste" step. We use ISO images and encourage their use. The OS volume will end up occupying somewhere around 8GB of actual space-on-disk after this step, making 40GB sound like overkill. However, the OS volume will bloat up to 18-20GB pretty quickly after updates, roles and feature additions. Adding application(s) will quickly chew up the rest.

  • Edit Settings... ->

    • Options -> Advanced -> General -> Uncheck "Enable logging"

    • Hardware -> CD/DVD Drive 1 ->

      • Click "Datastore ISO File"

        • Browse to Windows 2008 R2 ISO image



      • Check "Connect at power on"



    • Options -> Advanced -> Boot Options -> Force BIOS Setup

      • Check "The next time the virtual machine boots, force entry into the BIOS setup screen"





  • Power on VM

  • Install Windows Server 2008 R2


Use Custom VMware Tools installation to disable "Shared Folders" feature:


It is important that VMware Tools be installed next, if for no other reason than to make the rest of the process quicker and easier. The additional step of disabling "Shared Folders" is for ESX/vSphere environments where shared folders are not supported. Since this option is installed by default, it can/should be removed in vSphere installations.

  • VM -> Guest -> Install VMware Tools ->

    • Custom -> VMware Device Drivers -> Disable "Shared Folder" feature



  • Restart
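
As an alternative to the interactive install above, the VMware Tools installer can be run silently from the mounted Tools ISO using its MSI-style options. A hedged sketch (drive letter assumed to be D:; use setup.exe instead of setup64.exe on 32-bit guests):

  cmd: d:\setup64.exe /S /v "/qn ADDLOCAL=ALL REMOVE=Hgfs REBOOT=R"

REMOVE=Hgfs leaves out the Shared Folders component, and REBOOT=R suppresses the automatic reboot so you can restart on your own schedule.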


Complete Initial Configuration Tasks:


Once the initial installation is complete, we need to complete the 2008 R2 basic configuration. If you are working in an AD environment, this is not the time to join the template to the domain as GPO conflicts may hinder manual template defaults. We've chosen a minimal package installation based on our typical deployment profile. Some features/roles may differ in your organization's template (mix to taste).

  • Set time zone -> Date and Time ->

    • Internet Time -> Change Settings... -> Set to local time source

    • Date and Time -> Change time zone... -> Set to local time zone



  • Provide computer name and domain -> Computer name ->

    • Enterprise Edition: W2K8R2ENT-TMPL

    • Standard Edition: W2K8R2STD-TMPL

    • Foundation Edition: W2K8R2FND-TMPL

    • Note: Don't join to a domain just yet...



  • Restart Later

  • Configure Networking

    • Disable QoS Packet Scheduler



  • Enable automatic updating and feedback

    • Manually configure settings

      • Windows automatic updating -> Change Setting... ->

        • Important updates -> "check for updates but let me choose whether to download and install them"

        • Recommended updates -> Check "Give me recommended updates the same way I receive important updates"

        • Who can install updates -> Uncheck "Allow all users to install updates on this computer"



      • Windows Error Reporting -> Change Setting... ->

        • Select "I don't want to participate, and don't ask me again"



      • Customer Experience Improvement Program -> Change Setting... ->

        • Select "No, I don't want to participate"







  • Download and install updates

    • Bring to current (may require several reboots)



  • Add features (to taste)

    • .NET Framework 3.5.1 Features

      • Check WCF Activation, Non-HTTP Activation

        • Pop-up: Click "Add Required Features"





    • SNMP Services

    • Telnet Client

    • TFTP Client

    • Windows PowerShell Integrated Scripting Environment (ISE)



  • Check for updates after new features

    • Install available updates



  • Enable Remote Desktop (command-line equivalents for this and the firewall steps appear after this list)

    • System Properties -> Remote

      • Windows 2003 AD

        • Select "Allow connection sfrom computers running any version of Remote Desktop"



      • Windows 2008 AD (optional)

        • Select "Allow connections only from computers runnign Remote Desktop with Network Level Authentication"







  • Windows Firewall

    • Turn Windows Firewall on or off

      • Home or work location settings

        • Turn off Windows Firewall



      • Public network location settings

        • Turn off Windows Firewall







  • Complete Initial Configuration Tasks

    • Check "Do not show this window at logon" and close




Modify and Silence Server Manager


(Optional) Parts of this step may violate your local security policies; however, it's more than likely that a GPO will ultimately override this configuration. We find it useful to have this disabled for "general purpose" templates - especially in a testing/lab environment where the security measures will be defeated as a matter of practice.

  • Security Information -> Configure IE ESC

    • Select Administrators Off

    • Select Users Off



  • Select "Do not show me this console at logon" and close


Modify Taskbar Properties


Making the taskbar usable for your organization is another matter of taste. We like smaller icons and maximizing desktop utility. We also hate being nagged by the notification area...

  • Right-click Taskbar -> Taskbar and Start Menu Properties ->

    • Taskbar -> Check "Use small icons"

    • Taskbar -> Customize... ->

      • Set all icons to "Only show notifications"

      • Click "Turn system icons on or off"

        • Turn off "Volume"





    • Start Menu -> Customize...

      • Uncheck "Use large icons"






Modify default settings in Control Panel


Some Control Panel changes will help "optimize" the performance of the VM by disabling unnecessary features like the screen saver and power management. We like to see our corporate logo on server desktops (regardless of performance implications), so now's the time to make that change as well.

  • Control Panel -> Power Options -> High Performance

    • Change plan settings -> Turn off the display -> Never



  • Control Panel -> Sound ->

    • Pop-up: "Would you like to enable the Windows Audio Service?" - No

    • Sound -> Sounds -> Sound Scheme: No Sounds

    • Uncheck "Play Windows Startup sound"



  • Control Panel -> VMware Tools -> Uncheck "Show VMware Tools in the taskbar"

  • Control Panel -> Display -> Change screen saver -> Screen Saver -> Blank, Wait 10 minutes

  • Change default desktop image (optional)

    • Copy your desktop logo background to a public folder (i.e. "c:\Users\Public\Public Pictures")

    • Control Panel -> Display -> Change desktop background -> Browse...

    • Find picture in browser, Picture position stretch
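
If you prefer to script these Control Panel tweaks, the Power Options step above has a command-line equivalent. A minimal sketch (the GUID shown is the well-known identifier of the built-in High performance plan - verify switch names with "powercfg /?" on your build):

  cmd: powercfg -setactive 8c5e7fda-e8bf-4a96-9a85-a6e23a8c635c
  cmd: powercfg -x monitor-timeout-ac 0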




Disable Swap File


Disabling swap will allow the defragment step to be more efficient and will disable VMware's advanced memory management functions. This is only temporary and we'll be enabling swap right before committing the VM to template.

  • Computer Properties -> Visual Effects -> Adjust for best performance

  • Computer Properties -> Advanced System Settings ->

    • System Properties -> Advanced -> Performance -> Settings... ->

    • Performance Options -> Advanced -> Change...

      • Uncheck "Automatically manage paging file size for all drives"

      • Select "No paging file"

      • Click "Set" to disable swap file






Remove hibernation file and set boot timeout


It has been pointed out that the hibernation and timeout settings will get re-enabled by the sysprep operation. Removing the hibernation file will still help with defragmentation now, and we'll reinforce these settings in the customization wizard later.

  • cmd: powercfg -h off

  • cmd: bcdedit /timeout 5


Disable indexing on C:


Indexing the OS disk can suck performance and increase disk I/O unnecessarily. Chances are, this template (when cloned) will be heavily cached on your disk array so indexing in the OS will not likely benefit the template. We prefer to disable this feature as a matter of practice.

  • C: -> Properties -> General ->

    • Uncheck "Allow files on this drive to have contents indexed in addition to file properties"

    • Apply -> Apply changes to C:\ only (or files and folders, to taste)
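
The same change can be made from a command prompt by setting the "not content indexed" attribute; a hedged sketch (check "attrib /?" on your build, and expect the recursive form to take a while):

  cmd: attrib +I C:\*.* /S /D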




Housekeeping


Time to clean up and prepare for a streamlined template. The first step is intended to aid the copying of "administrator defaults" to "user defaults." If this does not apply, just defragment.

Remove "Default" user settings:

  • C:\Users -> Folder Options -> View -> Show hidden files...

  • C:\Users\Default -> Delete "NTUser.*" and delete the "Music", "Pictures", "Saved Games" and "Videos" folders
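
From a command prompt, the same cleanup might look like this (a sketch - double-check the paths before deleting anything):

  cmd: del /a /q C:\Users\Default\NTUSER.*
  cmd: rd /s /q "C:\Users\Default\Music"
  cmd: rd /s /q "C:\Users\Default\Pictures"
  cmd: rd /s /q "C:\Users\Default\Saved Games"
  cmd: rd /s /q "C:\Users\Default\Videos"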


Defragment

  • C: -> Properties -> Tools -> Defragment Now...

    • Select "(C:)"

    • Click "Defragment disk"




Copy Administrator settings to "Default" user


The "formal" way of handling this step requires a third-party utility. We're giving credit to Jason Samuel for consolidating other bloggers methods because he was the first to point out the importance of the "unattend.xml" file and it really saved us some time. His blog post also includes a link to an example "unattend.xml" file that can be modified for your specific use, as we have.

  • Jason Samuel points out a way to "easily" copy Administrator settings to defaults, by activating the CopyProfile node in an "unattend.xml" file used by sysprep.

  • Copy your "unattend.xml" file to C:\windows\system32\sysprep

  • Edit unattend.xml for environment and R2 version

    • Update offline image pointer to correspond to your virtual CD

      • E.g. wim:d:... -> wim:f:...



    • Update OS offline image source pointer, valid sources are:

      • Windows Server 2008 R2 SERVERDATACENTER

      • Windows Server 2008 R2 SERVERDATACENTERCORE

      • Windows Server 2008 R2 SERVERENTERPRISE

      • Windows Server 2008 R2 SERVERENTERPRISECORE

      • Windows Server 2008 R2 SERVERSTANDARD

      • Windows Server 2008 R2 SERVERSTANDARDCORE

      • Windows Server 2008 R2 SERVERWEB

      • Windows Server 2008 R2 SERVERWEBCORE

      • Windows Server 2008 R2 SERVERWINFOUNDATION



    • Any additional changes necessary



  • NOTE: now would be a good time to snapshot/backup the VM

  • cmd: cd \windows\system32\sysprep

  • cmd: sysprep /generalize /oobe /reboot /unattend:unattend.xml

    • Check "Generalize"

    • Shutdown Options -> Reboot



  • Login

  • Skip Activation

  • Administrator defaults are now system defaults



  • Reset Template Name

    • Computer Properties -> Advanced System Settings -> Computer name -> Change...

      • Enterprise Edition: W2K8R2ENT-TMPL

      • Standard Edition: W2K8R2STD-TMPL

      • Foundation Edition: W2K8R2FND-TMPL



    • If this will be an AD member clone, join template to the domain now



    • Restart





  • Enable Swap files

    • Computer Properties -> Advanced System Settings ->

      • System Properties -> Advanced -> Performance -> Settings... ->

      • Performance Options -> Advanced -> Change...

        • Check "Automatically manage paging file size for all drives"







  • Release IP

    • cmd: ipconfig /release



  • Shutdown

  • Convert VM to template


Clone and Test the VM Template


Use the VMware Customization Wizard to create a re-usable script for cloning the template. Now's a good time to test that your template will create a usable clone. If it fails, go check the "red letter" items and make sure your setup is correct. The following hints will help improve your results.

  • Remove hibernation-related files and reset the boot delay to 5 seconds in the Customization Wizard


  • Remember that the ISO is still mounted by default. Once VMs are deployed from the template, the ISO should be disconnected after the customization process is complete and any additional roles/features are added.


That's the process we have working at SOLORI. It's not rocket science, but if you miss an important step you're likely to be visited by an error in "pass [specialize]" that will have you starting over. Note: this also happens when your AD credentials are bad, your license key is incorrect (version/edition mismatch, typo, etc.) or when other nondescript issues arise - too bad the error code is unhelpful...

Wednesday, September 29, 2010

Short-Take: Jeff Bonwick Leaves Oracle after Two Decades

Jeff Bonwick's last day at Oracle may be September 30, 2010, after two decades with Sun, but his contributions to ZFS and Solaris will live on through Oracle and open source storage for decades to come. In 2007, Bill Moore and Jeff Bonwick (co-founders of ZFS) and Pawel Jakub Dawidek (who ported ZFS to FreeBSD) were interviewed by David Brown for the Association for Computing Machinery and discussed the future of file systems. The discussion gave good insight into the visionary thinking behind ZFS and how the designers set out to solve problems that would plague future storage systems.
One thing that has changed, as Bill already mentioned, is that the error rates have remained constant, yet the amount of data and the I/O bandwidths have gone up tremendously. Back when we added large file support to Solaris 2.6, creating a one-terabyte file was a big deal. It took a week and an awful lot of disks to create this file.

Now for comparison, take a look at, say, Greenplum’s database software, which is based on Solaris and ZFS. Greenplum has created a data-warehousing appliance consisting of a rack of 10 Thumpers (SunFire x4500s). They can scan data at a rate of one terabyte per minute. That’s a whole different deal. Now if you’re getting an uncorrectable error occurring once every 10 to 20 terabytes, that’s once every 10 to 20 minutes—which is pretty bad, actually.

- Jeff Bonwick, ACM Queue, November, 2007



But it's quotes like this from Jeff's blog in 2007 that really resonate with my experience:
Custom interconnects can't keep up with Ethernet.  In the time that Fibre Channel went from 1Gb to 4Gb -- a factor of 4 -- Ethernet went from 10Mb to 10Gb -- a factor of 1000.  That SAN is just slowing you down.

Today's world of array products running custom firmware on custom RAID controllers on a Fibre Channel SAN is in for massive disruption. It will be replaced by intelligent storage servers, built from commodity hardware, running an open operating system, speaking over the real network.

- Jeff Bonwick, Sun Blog, April 2007



My old business partner, Craig White, philosopher and network architect at BT, let me in on that secret back in the late 90's. At the time I was spreading Ethernet across a small city while Craig was off to Level3 - spreading gigabit Ethernet across entire continents. He made it clear to me that Ethernet - in its simplicity and utility - was like the loyal mutt that never let you down and always rose to meet a fight. Betting against Ethernet's domination as an interconnect was like betting against the house: ultimately a losing proposition. While there will always be room for exotic interconnects, the remaining 95% of the market will look to Ethernet. Look up "ubiquity" in the dictionary - it's right there next to Ethernet, and it's come a long way since it first appeared on Bob Metcalfe's napkin in '73.

Looking back at Jeff's Sun blog, it's pretty clear that Sun's "near-death experience" had the same profound effect on his thinking; and perhaps that change made him ultimately incompatible with the Oracle culture. I doubt a culture that embraces the voracious acquisition and marketing posture of former HP CEO Mark Hurd would likewise embrace the unknown risk and intangible reward framework of openness.
In each case, asking the question with a truly open mind changed the answer.  We killed our more-of-the-same SPARC roadmap and went multi-core, multi-thread, and low-power instead.  We started building AMD and Intel systems.  We launched a wave of innovation in Solaris (DTrace, ZFS, zones, FMA, SMF, FireEngine, CrossBow) and open-sourced all of it.  We started supporting Linux and Windows.  And most recently, we open-sourced Java.  In short, we changed just about everything.  Including, over time, the culture.

Still, there was no guarantee that open-sourcing Solaris would change anything.  It's that same nagging fear you have the first time you throw a party: what if nobody comes?  But in fact, it changed everything: the level of interest, the rate of adoption, the pace of communication.  Most significantly, it changed the way we do development.  It's not just the code that's open, but the entire development process.  And that, in turn, is attracting developers and ISVs whom we couldn't even have spoken to a few years ago.  The openness permits us to have the conversation; the technology makes the conversation interesting.

- Jeff Bonwick, Sun blog, April 2007



This lesson, I fear, cannot be unlearned, and perhaps that's a good thing. There's a side to an engineer's creation that goes way beyond profit and loss, schedules and deadlines, or success and failure. This side probably fits better in the subjective realm of the arts than the objective realm of engineering and capitalism. It's where inspiration and disruptive ideas abide. Reading Bonwick's "farewell" posting, it's clear to see that the inspirational road ahead has more allure than recidivism at Oracle. I'll leave it in his words:
For me, it's time to try the Next Big Thing. Something I haven't fully fleshed out yet. Something I don't fully understand yet. Something way outside my comfort zone. Something I might fail at. Everything worth doing begins that way. I'll let you know how it goes.

- Jeff Bonwick, Sun blog, September 2010


Saturday, September 18, 2010

Short-Take: OpenSolaris mantle assumed by Illumos, OpenIndiana

While Oracle has effectively "closed the source" to key Solaris code by making updates available only when "full releases" are distributed, others in the "formerly OpenSolaris" community are stepping up to carry the mantle for the community. In an internal memo - leaked to the OpenSolaris news group last month - Oracle makes the new policy clear:
We will distribute updates to approved CDDL or other open source-licensed code following full releases of our enterprise Solaris operating system. In this manner, new technology innovations will show up in our releases before anywhere else. We will no longer distribute source code for the entirety of the Solaris operating system in real-time while it is developed, on a nightly basis.

- Oracle Memo to Solaris Engineering, Aug, 2010



Frankly, Oracle clearly sees continuous availability of code updates as a threat to its control over its "best-of-breed" acquisition in Solaris. It will be interesting to see how long Oracle takes to reverse the decision (and whether or not it will be too late...)

However, at least two initiatives are stepping up to carry the mantle of "freely accessible and open" Solaris code for the community: Illumos and OpenIndiana. Illumos' goal can be summed up as follows:
Well the first thing is that the project is designed here to solve a key problem, and that is that not all of OpenSolaris is really open source. And there's a lot of other potential concerns in the community, but this one is really kind of a core one, and from solving this, I think a lot of other issues can be solved.

- Excerpt, Illumos Announcement Transcript



That said, it's pretty clear that Illumos will be a distinct fork away from "questionable" code (from a licensing perspective). We already see a lot of chatter/concern about this in the news/mail groups.

The second announcement comes from the OpenIndiana group (part of the Illumos Foundation) and appears to be to Solaris as CentOS is to Red Hat Enterprise Linux. OpenIndiana's press release says it like this:
OpenIndiana, an exciting new distribution of OpenSolaris, built by the community, for the community - available for immediate download! OpenIndiana is a continuation of the OpenSolaris legacy and aims to be binary and package compatible with Oracle Solaris 11 and Solaris 11 Express.

- OpenIndiana Press Release, September 2010

Does any of this mean that OpenSolaris is going away or being discontinued? Strictly speaking: no - it lives on as Solaris 11 Express, et al. It does mean that code changes will be more tightly controlled by Oracle, and - judging from the reaction of the developer community - this exertion of control may slow or eliminate open source contribution to the Solaris/OpenSolaris corpus. Further, Solaris 11 won't be "free for production use" as earlier versions of Solaris were. It also means that distributions and appliance derivatives (like NexentaStor and Nexenta Core) will be able to thrive despite Oracle's tightening.

Illumos has yet to release a distribution; OpenIndiana has distributions available for download today.

Friday, September 17, 2010

Quick-Take: ZFS and Early Disk Failure

Anyone who's discussed storage with me knows that I "hate" desktop drives in storage arrays. When using SAS disks as a standard, that's typically a non-issue because there's rarely a distinction between "desktop" and "server" disks in the SAS world. Therefore, you know I'm talking about the other "S" word - SATA. Here's a tale of SATA woe that I've seen repeatedly cause problems for inexperienced ZFS'ers out there...

When volumes fail in ZFS, the "final" indicator is data corruption. Fortunately, ZFS checksums recognize corrupted data and can take action to correct and report the problem. But that's like treating cancer only after you've experienced the symptoms. In fact, the failing disk will likely begin to "under-perform" well before actual "hard" errors show up as read, write or checksum errors in the ZFS pool. Depending on the reason for "under-performing," this can affect the performance of any controller, pool or enclosure that contains the disk.

Wait - did he say enclosure? Sure. Just like a bad NIC chattering on a loaded network, a bad SATA device can occupy enough of the available service time of a controller or SAS bus (i.e. JBOD enclosure) to cause a noticeable performance drop in otherwise "unrelated" ZFS pools. Hence, detecting such events is important. Here's an example of an old WD SATA disk failing as viewed from the NexentaStor "Data Sets" GUI:

[caption id="attachment_1660" align="aligncenter" width="450" caption="Something is wrong with device c5t84d0..."]Disk Statistics showing failing drive[/caption]

Device c5t84d0 is having some serious problems. Its busy time is 7x higher than its counterparts', and its average service time is 14x higher. As a member of a RAIDz group, the entire group is being held back by this "under-performing" member. From this snapshot, it appears that NexentaStor is giving us good information about the disk from the "web GUI," but that assumption would not be correct. In fact, the "web GUI" only reports "real time" data while the disk is under load; in the case of a lightly loaded zpool, the statistics may not even be reported.

However, from the command shell, historic and real-time access to per-device performance is available. The output of "iostat -exn" shows the count of all errors for devices since the last time counters were reset, and average I/O loads for each:
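
For example, to watch the counters refresh under load, something like this works from the shell (the trailing number is a sampling interval in seconds):

  iostat -exn 5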

[caption id="attachment_1662" align="aligncenter" width="450" caption="Device statistics from 'iostat' show error and I/O history."]Device statistics from 'iostat' show error and I/O history.[/caption]

The output of iostat clearly shows this disk has serious hardware problems: it indicates hardware errors as well as transmission errors for the device recognized as 'c5t84d0', and the I/O statistics - chiefly read, write and average service time - implicate this disk as a performance problem for the associated RAIDz group. So, if the device is really failing, shouldn't there be a log report of such an event? Yes, and here's a snip from the message log showing the error:

[caption id="attachment_1663" align="aligncenter" width="450" caption="SCSI error with ioc_status=0x8048 reported in /var/log/messages for failing device."]SCSI error with ioc_status=0x8048 reported in /var/log/messages[/caption]

However, in this case, the log is not "full" of messages of this sort. In fact, the error only showed up under the stress of an iozone benchmark (run from the NexentaStor 'nmc' console). I can (somewhat safely) conclude this to be a device failure since at least one other disk in this group is of the same make, model and firmware revision as the culprit. The interesting aspect of this "failure" is that it does not result in a read, write or checksum error for the associated zpool. Why? Because the device is only loosely coupled to the zpool as a constituent leaf device, and the absence of pool errors implies that the device errors were recoverable by either the drive or the device driver (mapping around a bad/hard error.)

Since these problems are being resolved at the device layer, the ZFS pool is "unaware" of the problem as you can see from the output of 'zpool status' for this volume:

[caption id="attachment_1661" align="aligncenter" width="450" caption="Problems with disk device as yet undetected at the zpool layer."]zpool status output for pool with undetected failing device[/caption]

This doesn't mean that the "consumers" of the zpool's resources are "unaware" of the problem, as the disk error has manifested itself in the zpool as higher delays, lower I/O throughput and, subsequently, less pool bandwidth. In short, if the error is persistent under load, the drive has a correctable but catastrophic (to performance) problem and will need to be replaced. If, however, the error goes away, it is possible that the device driver has suitably corrected for the problem and the drive can stay in place.

SOLORI's Take: How do we know if the drive needs to be replaced? Time will establish an error rate. In short, running the benchmark again and watching the error counters for the device will determine whether the problem persists. Eventually, the errors will either go away or they won't. For me, I'm hoping that the disk fails, to give me an excuse to replace the whole pool with a new set of SATA "eco/green" disks for more lab play. Stay tuned...

SOLORI's Take: In all of its flavors, 1.5Gbps, 3Gbps and 6Gbps, I find SATA drives inferior to "similarly" spec'd SAS for just about everything. In my experience, the worst SAS drives I've ever used have been more reliable than most of the SATA drives I've used. That doesn't mean there are "no" good SATA drives, but it means that you really need to work within tighter boundaries when mixing vendors and models in SATA arrays. On top of that, the additional drive port and better typical sustained performance make SAS a clear winner over SATA (IMHO). The big exception to the rule is economy - especially where disk arrays are used for on-line backup - but that's another discussion...

Wednesday, September 15, 2010

Short-Take: SQL Performance Notes

Here are some Microsoft SQL performance notes from discussions that inevitably crop-up when discussing SQL storage:

  1. Where do I find technical resources for the current version of MS SQL?

  2. I'm new to SQL I/O performance, how can I learn the basics?

  3. The basics talk about SQL 2000, but what about performance considerations due to changes in SQL 2005?

  4. How does using SQL Server 6.x versus SQL Server 7.0 change storage I/O performance assumptions?

  5. How does TEMPDB affect storage (and memory) requirements and architecture?

  6. How does controller and disk caching affect SQL performance and data integrity?

  7. How can I use NAS for storage of SQL database in a test/lab environment?

  8. What additional considerations are necessary to implement database mirroring in SQL Server?

  9. When do SQL dirty cache pages get flushed to disk?

  10. Where can I find Microsoft's general reference sheet on SQL I/O requirements for more information?


From performance tuning to performance testing and diagnostics:

  1. I've heard that SQLIOStress has been replaced by SQLIOSim: where can I find out about SQLIOSim to evaluate my storage I/O system before application testing?

  2. How do I diagnose and detect "unreported" SQL I/O problems?

  3. How do I diagnose stuck/stalled I/O problems in SQL Server?

  4. What are Bufwait and Writelog Timeout messages in SQL Server indicating?

  5. Can I control SQL Server checkpoint behavior to avoid additional I/O during certain operations?

  6. Where can I get the SQLIO benchmark tool to assess the potential of my current configuration?


That should provide a good half-day's reading for any storage/db admin...

Tuesday, September 14, 2010

Short-Take: iSCSI+Nexenta, Performance Notes

Here are a few performance tips for running iSCSI with NexentaStor in a Windows environment:

  1. When using the Windows iSCSI Software Initiator with some workloads, disabling the Nagle Algorithm on the Windows Server is sometimes recommended;

  2. Tuning TCP window and iSCSI parameters on each side of the connection can deliver better performance;

  3. VMware part of the equation? Adjusting the way VMware handles congestion events could be useful;

  4. On NexentaStor, disable the Nagle Algorithm by setting its limit to "1" (the default value of 4095 leaves Nagle enabled)
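
On the NexentaStor (OpenSolaris) side, the Nagle limit in item 4 can be changed from a root shell with the 'ndd' tunable; a sketch (the setting does not survive a reboot unless added to a startup script):

  ndd -get /dev/tcp tcp_naglim_def     (shows the current value, 4095 by default)
  ndd -set /dev/tcp tcp_naglim_def 1   (effectively disables Nagle)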


For storage applications where latency is a paramount issue, these hints just might help...