Less Than 1 Week Until the End of Support for Legacy versions of Internet Explorer (except Windows Vista and Server 2008)


Here we are! It is almost time. Over 16 months ago, Microsoft announced that support for legacy versions of Internet Explorer would be ending on January 12th, 2016 (http://blogs.msdn.com/b/ie/archive/2014/08/07/stay-up-to-date-with-internet-explorer.aspx). The hour is almost upon us. Alongside the announcement, technologies including Enterprise Mode, Compatibility View, and persistent emulation modes were added or enhanced to assist customers in bringing older sites and web applications forward, removing deployment blockers to IE11 and, ultimately, Windows 10. Most of our enterprise customers have already leveraged (or are currently in the process of leveraging) these technologies.

If you are still running an older version, you will soon notice a warning message start to appear. In December, Microsoft published an article (https://support.microsoft.com/en-us/kb/3123303) that lays out the details of a new "End of Life" upgrade notification for Internet Explorer, which will ship as an update next week on January 12th.

The update will apply to Windows 7 SP1 and Windows Server 2008 R2 for users who have not upgraded to Internet Explorer 11 (i.e. IE8, IE9, and IE10 users). The update includes a new “end of support” notification feature when the browser is launched. This will automatically open a new tab with the appropriate download page (http://windows.microsoft.com/en-us/internet-explorer/download-ie) for your particular operating system.

For those enterprise customers that are still in the process of deploying and migrating to Internet Explorer 11 (or have arranged for a custom support agreement), the KB article mentioned above also lays out instructions for disabling the notifications.
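For reference, the notification can be suppressed via a feature-control registry key. The sketch below reflects my recollection of the key documented in KB3123303 – verify the exact key and value names against the KB itself before deploying anything:

# Sketch only - confirm the key and value names against KB3123303 before deploying
$key = 'HKLM:\SOFTWARE\Policies\Microsoft\Internet Explorer\Main\FeatureControl\FEATURE_DISABLE_IE11_SOFT_BLOCK'
New-Item -Path $key -Force | Out-Null
Set-ItemProperty -Path $key -Name 'iexplore.exe' -Value 1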

For those customers still on Windows Vista and Windows Server 2008 (which are in extended support and do not support IE11) – those operating systems will not be affected by the update. IE9 is still the latest version of Internet Explorer supported on those operating systems. Windows 8 and Windows 8.1 are also unaffected (support for Windows 8 ends on January 12th, and Windows 8.1 comes with IE11).

The notification tab will not appear on every launch of the browser. After the tab is closed, 72 hours must pass before it is shown again, and it only appears when IE is launched (i.e. not during a browsing session).

For more information about the end of support for old versions of Internet Explorer, see the following links: https://www.microsoft.com/en-us/WindowsForBusiness/End-of-IE-support and https://support.microsoft.com/en-us/lifecycle#gp/Microsoft-Internet-Explorer. For technical information about how to upgrade to Internet Explorer 11 and Microsoft Edge, see the Browser TechCenter pages on TechNet (https://technet.microsoft.com/en-us/browser).

For my Friends and Family: You have no excuse not to secure your Microsoft Accounts with Multi-Factor Authentication

November 27, 2015

I am always begging my close friends and family, many of whom are not all that technical, to follow basic tenets for securing their digital worlds. From changing their passwords on a regular basis (even having them schedule it to coincide with Daylight Saving Time changes, a la "smoke detector battery changes") to keeping their operating systems and anti-virus software up to date, I warn them that risks are not just for enterprises and governments. In fact, in the past six months, the following has happened to me:

  • A good friend of my mother (a female) began sending me webcam spam from her Skype account.
  • An old high school friend (another female) began sending out large organ pics (male) to everyone on her Facebook friends list.
  • My sister got hit with some serious ransomware. All of her pictures are encrypted with a $500 ransom. She’s still running Windows XP.

Given that my primary accounts for personal use involve Microsoft services and accounts – and I work for Microsoft – I feel compelled to evangelize the fact that all of your Microsoft online accounts (Hotmail, Live, Outlook.com, Office 365) can be protected via multi-factor authentication.

 

What is Multi-Factor Authentication? It is simply a method of authentication that involves at least two disparate factors. In most cases, single-factor authentication involves a simple password to verify identity. This is the oldest and one of the most archaic and insecure methods of verifying identity. When you enable multi-factor authentication, even after submitting a correct password, additional steps are taken to verify you are who you say you are. You may have to do this when you sign on to a web site from an unknown or previously unknown location. In some cases, you may have to answer additional security questions (not the best additional factor, but indeed an additional factor) or enter a text code sent to your mobile phone (a much more secure secondary factor).

 

In the case of a Microsoft account, the following FAQ answers your questions about the options available:

http://windows.microsoft.com/en-us/windows/two-step-verification-faq

If you want to enable multifactor authentication, you can do so under your account profile here:

https://account.live.com/proofs/Manage

If you are accessing Hotmail, Live, or Outlook.com from Outlook 2010, 2013, or 2016, you will need to set up app passwords (app-specific passwords) after you enable two-step/multi-factor authentication:

http://windows.microsoft.com/en-us/windows/app-passwords-two-step-verification

An excellent post on Channel 9 along the same lines:

https://channel9.msdn.com/posts/Multi-Factor-Account-Setup

The Authenticator App for Windows Phone gives you codes to use: 

https://www.microsoft.com/en-US/store/apps/Authenticator/9WZDNCRFJ3RJ

This blog post walks you through the process: 

http://blogs.technet.com/b/mspfe/archive/2013/10/02/how-to-use-the-microsoft-authenticator-app-for-windows-phone-to-enable-two-factor-authentication-on-facebook.aspx

For Office 365 accounts:

https://support.office.com/en-us/article/Set-up-multi-factor-authentication-for-Office-365-8f0454b2-f51a-4d9c-bcde-2c48e41621c6

If you are using an Android phone, the Microsoft Account app will also allow for verification through a one-touch app.

https://play.google.com/store/apps/details?id=com.microsoft.msa.authenticator 

FAQ on additional identity verification apps:

http://windows.microsoft.com/en-US/Windows/identity-verification-apps-faq

 


Farewell to Zune

November 14, 2015

As I write this, the Zune service is expected to end within a few hours, per earlier announcements. As for what exactly will happen to the functionality of the Zune 4.8 software, only limited functionality will remain.

I am sad. I loved the Zune player – especially the ZuneHD. I still use the ZuneHD rather than the phone because of the storage space, and the fact that battery consumption is way better on the ZuneHD player than any phone I have used or seen.


It is likely that downloaded subscription content will start to fail at some point once media usage rights need to be re-queried. All other DRM-free MP3/WMA media should still play as expected. I imagine that device sync will still work as well. I had the pleasure of keeping the 10-song-a-month feature thanks to the grandfather policy. Since this will be ending, I made sure to use my song credits this last time. The songs I chose were:

  • Lou Reed – What’s Good
  • Roxy Music – Avalon
  • Wendy Bagwell – Here Come the Rattlesnakes
  • Warren Zevon – Boom-Boom Mancini
  • The Cramps – I was a Teenage Werewolf
  • Blondie – X-Offender
  • Tom Petty – Straight Into Darkness
  • Deep Purple – Hush
  • Deep Purple – Smoke on the Water
  • Deep Purple – Highway Star

(Yes, I have an eclectic variety of tastes)

So What Happens Next?

Per the following KB article: https://support.microsoft.com/en-us/kb/3096659

Existing Zune Services will be converted to Groove Music (formerly XBOX Music) – not to be confused with that other software Microsoft acquired over a decade ago. I’ll be trying to use my ZuneHD with this service.


I used every Zune device that was released and still have them – including the original Zune30 from 2006. I am somewhat sad.


App-V 5.x Client Publishing Server Address

October 22, 2015

Hi all,

I've had the pleasure of working with the Gladiator for the past 5 years and now I have the opportunity to provide some great information about App-V 5.x to the world.

What we have been seeing out in the field is that there seems to be some confusion on how to view the App-V 5.x Publishing Server XML correctly, so this should walk you through how to view the metadata from a user's perspective.

Firstly, if you query the Publishing Server using “Get-AppvPublishingServer”, the following will be returned.
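For example (the server name and URL below are purely illustrative):

Get-AppvPublishingServer | Select-Object Id, Name, URL

# Sample output (hypothetical values):
# Id Name      URL
# -- ----      ---
#  1 AppV-Pub  http://appvpub.contoso.com:8001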

You can then take the URL and paste it in a browser to see which packages and connection groups are entitled to you.

The misconception here is that you're meant to see the connection group because you're in the AD group – so why can't it be seen?

The App-V 5.x client is actually appending parameters to the URL so that it can see the correct data; you can review what is passed here: https://technet.microsoft.com/en-us/library/dn858700.aspx#BKMK_pub_metadata_clientversion.

Reviewing the article, there are two parameters – ClientVersion and ClientOS – which will show a true representation of user entitlement, and that is what you should be using to query the publishing server.

Note: The ClientOS is used so that we can show or hide the OS restrictions set within the App-V Package.

To make your life easier, run the following PowerShell script on the App-V client; it will build the client URL for you.

Import-Module AppvClient

# Get the publishing server URL configured on this client
$AppVPS = Get-AppvPublishingServer | Select-Object URL
$url = $AppVPS.URL

# App-V client version
$AppV_Version = (Get-ItemProperty 'HKLM:\SOFTWARE\Microsoft\AppV\Client' -Name Version).Version

# Windows version (e.g. 6.1, 6.3)
$Ver = (Get-ItemProperty 'HKLM:\SOFTWARE\Microsoft\Windows NT\CurrentVersion' -Name CurrentVersion).CurrentVersion

# Server or Client SKU
$Model_Check = (Get-WmiObject -Class Win32_OperatingSystem).Caption
if ($Model_Check.Contains("Server")) { $Model = "Server" } else { $Model = "Client" }

# OS architecture
if ((Get-WmiObject -Class Win32_OperatingSystem).OSArchitecture -eq '64-bit') { $architecture = "x64" } else { $architecture = "x86" }

# Build the URL the App-V client actually requests from the publishing server
$ClientURL = $url + "?ClientVersion=" + $AppV_Version + "&ClientOS=Windows" + $Model + "_" + $Ver + "_" + $architecture

$ClientURL

Then copy the returned $ClientURL value and paste it into your browser; you should see a different result, including the connection group which you couldn't see before.

Disclaimer
The sample scripts are not supported under any Microsoft standard support program or service. The sample scripts are provided AS IS without warranty of any kind. Microsoft further disclaims all implied warranties including, without limitation, any implied warranties of merchantability or of fitness for a particular purpose. The entire risk arising out of the use or performance of the sample scripts and documentation remains with you. In no event shall Microsoft, its authors, or anyone else involved in the creation, production, or delivery of the scripts be liable for any damages whatsoever (including, without limitation, damages for loss of business profits, business interruption, loss of business information, or other pecuniary loss) arising out of the use of or inability to use the sample scripts or documentation, even if Microsoft has been advised of the possibility of such damages.

Hope this helps in understanding what a user is actually entitled to.

David Falkus | Premier Field Engineer | Application Virtualization

Planning for Windows iSCSI SAN boot on Private Cloud Bare Metal Hosts


Data Center Modernization has definitely reached critical mass. The message that came from TechEd 2013 was "It's time to make Hybrid Cloud real." That, of course, starts with modernizing your data center to be able to implement private clouds. On top of that, more and more data centers are migrating their hypervisors to Hyper-V in spite of the greater footprint a full Windows Server operating system has on the bare metal. The feature parity as well as the cost savings that come from Hyper-V as a feature (and the subsequent removal of the VMware tax) offset the hassle of the additional footprint.

Windows Server bare metal hosts running Hyper-V, like other hypervisors, support SAN boot of the operating system drive using iSCSI. It is important to realize that the iSCSI services depend on the underlying storage and iSCSI network being provisioned properly to accommodate the eccentricities of how Windows boots from SAN using network interface cards in place of traditional storage adapters or HBAs.

Understand the Supportability Parameters

Supportability of the storage comes from the storage vendor. This also extends to iSCSI boot SAN scenarios per the KB article: https://support.microsoft.com/en-us/kb/305547/en-us. Even though the article does not mention Windows Server 2012 (or R2), it still applies. Normally, this would not be complicated, but in the case of iSCSI networks, the device may well be using a NIC to locate the storage (especially if it is actually using NAS – network attached storage – e.g. NetApp) and not a traditional storage adapter or HBA.

Slipstream your 3rd-party drivers if possible

The use of slipstreamed NIC/storage drivers in the installation ISO will prevent any timing issues from swapping back and forth between driver media and OS media. This may especially be the case if you are controlling headless blade devices using KVM or some other solution. I have found that this resolves many of the issues outlined in this particular KB: https://support.microsoft.com/en-us/kb/2826787 – as well as the 0x80070057 error message when trying to format drives or create partitions during operating system setup.
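As a minimal sketch of offline driver injection using the DISM PowerShell cmdlets (available on a Windows 8/Server 2012 or later technician machine; the paths below are placeholders, and Dism.exe offers equivalent switches):

# Mount the install image, inject the vendor NIC/storage drivers, then save the changes
Mount-WindowsImage -ImagePath 'C:\Images\install.wim' -Index 1 -Path 'C:\Mount'
Add-WindowsDriver -Path 'C:\Mount' -Driver 'C:\Drivers\iSCSI-NIC' -Recurse
Dismount-WindowsImage -Path 'C:\Mount' -Save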

No Thin-Provisioning LUNs for the OS Boot Drive

LUNs on the NAS devices (e.g. NetApp devices) need to be thick-provisioned for the drive containing the OS instead of thin-provisioned. In addition, LUNs for the host OS boot volume only should be 127 GB or less. Remember, this is only in the context of the LUN being used for the host device's iSCSI boot volume.

Avoid using Default Gateways for iSCSI NICs

The NICs configured for the iSCSI SAN should avoid having a default gateway. A default gateway can cause issues such as slow throughput during the formatting of disks and the copying of files during installation. This has been an issue with the Windows iSCSI initiator in the past and has previously appeared in KB articles (a configuration sketch follows at the end of this section):

960104: If you start a system from iSCSI, the gateway specified in the iSCSI Boot solution will always be used by Windows to communicate with the iSCSI Target

http://support.microsoft.com/kb/960104/EN-US  

2727330: Default gateway is set to 0.0.0.0 if you start a Windows Vista-based, Windows 7-based, Windows Server 2008-based or Windows Server 2008 R2-based computer from an iSCSI boot device

http://support.microsoft.com/kb/2727330/EN-US  

In addition, the network ports connecting to the boot volume iSCSI interfaces on the iSCSI network’s switch should have ICMP redirect disabled.
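If you are configuring the iSCSI-facing NICs from within Windows (Server 2012 or later), a minimal sketch looks like this – the interface alias and addressing are placeholders:

# Assign a static address to the iSCSI-facing NIC without supplying a -DefaultGateway
New-NetIPAddress -InterfaceAlias 'iSCSI-A' -IPAddress 10.10.10.11 -PrefixLength 24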

If all else fails . . . revert to the old way!

If the interactive installation still fails, remember – there is the legacy way of deploying Windows Servers in an iSCSI SAN boot configuration outlined in:

https://technet.microsoft.com/en-us/library/ee619722%28v=ws.10%29.aspx

On the Bloggers, Analysts, and the Shareholders.


Over the past 20 years, I have had many different roles in IT. I've been a helpdesk jockey, professor/instructor, sysadmin, developer, support engineer, escalation engineer, and now consultant. I've worked with a variety of industries as well. I've been both a customer and an employee of a Fortune 100 software company. As I have moved into various roles through my career, I've simultaneously watched the IT community grow in its pontificating across various mediums, ranging from community forums to full-blown tabloid tech journalism. I've learned what kinds of statements garner respect and attention and which are often dismissed as hyperbole or sensationalism.

The Bloggers

The bloggers are supposed to represent the users and/or IT pros – the "pulse" of the community. In many cases, the quality of the blogging is positive, as bloggers deliver excellent content and insights due to one or more of the following factors:

Experience: A blogger will likely be taken seriously if they have the experience to back up what they are talking about. This is why the best insights often come buried deep inside of community forums and not necessarily on the site of a full-time blogger or tech journal. Why? Because blogging is not their job. They ARE an IT Pro. Blogging is merely a hobby.

Depth of Analytical Thought: They demonstrate an outstanding aptitude for critical thinking. Even if the source is focused towards a specific vendor (or as many say – biased) the analysis is spot on.

Depth of Technical Thought: Simply – they know the technology inside and out. They yield a wealth of technical information and for that reason alone, they often command respect.

I am here to tell you that the influence bloggers have on software vendors and products often depends on how they engage and embrace the community around the vendor and its products – regardless of whether they "bash" a product or feature or "praise" it. If the community respects the blogger, their stature increases with the software vendor. If the blogger is simply ranting or spilling out hyperbole for the sole purpose of "click-bait," that can come back to haunt them. This is often a challenge for full-time bloggers who are selling advertisements to generate revenue or perhaps freelancing for a journal that pays them literally by the click.

The Analysts

When you build up that large an amount of overhead, you need to keep those clicks and ad views going, and the blogger has no choice but to be a provocateur to remain relevant in the IT tabloid media that those same bloggers helped to create. When an IT analyst or an IT research firm publishes opinions or assessments, they are always taken more seriously, as they represent a wealth of combined experience and knowledge. They approach product, technology, and industry analysis with a much more scientific and data-driven process. The research firms publish both analytical and technical depth in every case.

The Shareholders

Since most major software vendors, at least in the US, are publicly traded, it is Wall Street that ultimately has the most influence on their direction. In IT, your shareholders are often your customers as well.

The Inspiration

I'd been wanting to write an article on this subject for a while, but this week I was inspired to write it after reading three distinct articles relating to RDS/VDI – a technology I have worked with extensively. I have the unique opportunity to cite examples of an attempt at influence by a blogger, a group of analysts, and a group of investors in a very busy week for the VDI industry.

The Blogger:

http://www.brianmadden.com/blogs/brianmadden/archive/2015/06/10/server-os-based-vdi-is-an-official-quot-feature-quot-of-windows-server-2016-apparently-microsoft-plans-to-continue-screwing-us-for-years-to-come.aspx – Basically, Brian Madden still hates how Microsoft does VDI. In other news, the Sun came out this morning.

The Analysts: http://www.gartner.com/document/3072722 – a brutally honest assessment by Gartner on why VDI is not ready for the cloud and what it will take to get VDI to a true cloud-based DaaS (Desktop as a Service.)

The Investors: http://www.valuewalk.com/2015/06/citrix-systems-inc-ctxs-elliott-associates-letter/ – The investment group Elliott Management reveals its desires for change at Citrix (the leader in VDI) in an open letter to its CEO and Board of Directors.

Which of those three articles that I mentioned do I pay the most attention to? Well, I always trust analysis over hyperbole – but money trumps all.


App-V: On App-V Applications Hosted in Azure RemoteApp

April 28, 2015

With the release of Azure RemoteApp, Enterprise customers can now move their non-persistent RDS session-hosted applications from the on-premises data centers into a hosted cloud – with the Azure platform providing all of the necessary image provisioning and updating services. With Azure RemoteApp, you can use gallery templates or your own custom image. In addition to your own custom image, you can leverage virtual applications using App-V. With App-V, you can reduce the size of your custom image uploads by streaming the content on-demand.

Right now, App-V support in Azure RemoteApp is limited and licensed to only hybrid collection deployments. This is due to the current licensing requirement of App-V needing to be on domain-joined computers. While you could use a cloud collection to test a virtual application, in order to take advantage of the image reduction features of App-V with Azure RemoteApp – and to have full supportability and license compliance, the implementation within Azure RemoteApp would need to be joined to a domain within a hybrid collection deployment using a Site-to-Site VPN.

Setting Up Azure RemoteApp Images

Before you set up your image for Azure RemoteApp, you will need to first set up your Azure RemoteApp Subscription at https://www.remoteapp.windowsazure.com/. In addition, you will need to set up Azure PowerShell on the machine where you will be uploading the image. You can download Azure PowerShell here at the following link:

http://azure.microsoft.com/en-us/documentation/articles/powershell-install-configure/#Install

There is also existing guidance for configuring a custom RemoteApp image for uploading:

http://azure.microsoft.com/en-us/documentation/articles/remoteapp-create-custom-image/

Make sure you follow everything specified in the documentation and that no steps are missed when configuring the VHD, including disabling encryption and ensuring the partitions are MBR-based. For App-V, there are some additional steps that you will need to ensure are included with regard to configuring and preparing the image.

Configuring App-V Client and Pre-requisites

  • In Server Manager, make sure .NET 3.5 and 4.5 Services are configured as features for Windows Server 2012 R2.
  • Install the most recent App-V 5 Client.
  • Install the App-V Client pre-requisites here: https://technet.microsoft.com/en-us/library/jj713458.aspx
  • Configure the App-V Client as required (script enablement, etc. – see the sketch below).
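A minimal sketch of that last step, assuming you want embedded package scripts enabled (adjust the configuration to your own requirements):

# Enable embedded package scripts on the App-V client
Import-Module AppvClient
Set-AppvClientConfiguration -EnablePackageScripts 1
# Verify the setting took effect
Get-AppvClientConfiguration | Where-Object Name -eq 'EnablePackageScripts'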

Publishing Applications

After the App-V Client has been configured, you will need to add and globally publish your virtual applications using PowerShell. You can do this using the built-in App-V PowerShell cmdlets referenced here: http://technet.microsoft.com/en-us/library/dn508409.aspx. Whether you are using hybrid or cloud deployments, only globally published applications will fully survive the generalization (and be picked up by the RemoteApp provisioning), so global publishing is currently a hard requirement.
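As a sketch, adding and globally publishing a package from a content share would look something like this (the UNC path is a placeholder):

# Add the package and publish it globally so it survives generalization
Import-Module AppvClient
Add-AppvClientPackage -Path '\\fileserver\content\MyApp\MyApp.appv' | Publish-AppvClientPackage -Global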

Testing and Final Preparation

You should test and verify your applications within the image prior to uploading it. Finally, before generalizing your image with the SysPrep tool, you will need to perform a current workaround for an issue between App-V and SysPrep: stop the App-V Client service and delete the local VFS folder under Local AppData (%LOCALAPPDATA%\Microsoft\AppV\Client\VFS).
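A sketch of that workaround (the App-V client service name is AppVClient; confirm the workaround against current guidance before generalizing):

# Stop the App-V client service and remove the local VFS folder prior to running SysPrep
Stop-Service -Name AppVClient -Force
Remove-Item -Path "$env:LOCALAPPDATA\Microsoft\AppV\Client\VFS" -Recurse -Force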

Also remember, if the image you are uploading is drastically behind in operating system updates, it will further delay provisioning after uploading.

The last thing you will need to do is generalize the image using the command line:

C:\Windows\System32\Sysprep\Sysprep.exe /generalize /oobe /shutdown

Creating the Collection

You will need to create an Azure RemoteApp collection to house the image and published applications from that image. You can use this quick reference for the details: http://azure.microsoft.com/en-us/documentation/articles/remoteapp-create-cloud-deployment/

In order to upload your custom image containing your virtual applications, in the collection dialog, you will need to click “Template Images.” You will then specify to upload a RemoteApp template image:

After you have given the name and location, it will take you to the next screen where you will download a PowerShell script that you will use to upload your VHD to the correct Blob.

Once you download and run the command from an elevated Azure PowerShell session, it will mount, validate, and fix up the image, thoroughly check its integrity, and then finally upload it to Azure.

While the image is uploading, the status will remain “Upload pending.”

Once the upload is complete, you can then apply the template image to a collection.

Once the image is associated with a collection, the provisioning will begin. This may take a while. It will show a status of “Provisioning” until it is finished fully prepping the image and parsing for applications.

Once the applications become available in the "Publish RemoteApp Programs" screen, you will see the App-V programs show alongside the native applications. These applications were queried during the provisioning that occurred after the collection was created. The App-V applications will be the ones originating from the App-V Client's PackageInstallationRoot (which by default is C:\ProgramData\App-V). Once the applications have been published and user access has been configured, you can then download the Azure RemoteApp RDP client from:

https://www.remoteapp.windowsazure.com/

Once you download the ClickOnce application, you will be prompted with a wizard upon first launch:

The first thing you will need to do is supply the appropriate credentials. You will need to supply a corporate (organizational) account or an MSA.

After you have been authenticated, you will see your published applications (both native and virtual applications) assigned and published to the user. You can then begin to test virtual application behavior in Azure RemoteApp.

App-V 5: On Java Virtualization Strategies


Throughout the past 15 years, from its origins in Softricity, one of App-V's primary use cases has been addressing complex version-dependent Java run-time ecosystems. The "application-to-application isolation model" of App-V – particularly using JRE runtimes as a test case – proved very successful for those applications and enterprise websites that were married to a specific runtime and needed to be used by the same user and/or multiple users on the same machine. As Softgrid became App-V, the client engine developed more and more methods of further, optional integration into the operating system via advanced extension points as well as dynamic virtualization (or just-in-time virtualization).

Fast-forward to today: while many of the old traditional issues that came with DLL Hell (such as DLL stomping) were rectified via registry-free assemblies and WinSxS, managing multiple JRE runtimes still requires intervention – especially when deployed to pooled session and virtual desktop environments (i.e. Citrix XenApp, MS RDSH/RDVH, etc.). As "JAR hell," as it is often called, appears to be here to stay for a while, JRE isolation remains one of the top use cases for App-V.

Historical Strategies

In the world of Softgrid up until Softgrid 4.1, the strategy choices were simple:

  • Single JRE (Virtualize None): The most desired scenario. This simplified deployments and allowed the JRE to be included in base operating system deployment images.
  • Virtualize All JREs: No native JRE in the base image. All versions are isolated using App-V.
  • Virtualize All but One JRE: In this scenario, one JRE version is installed natively in the base image while all other required versions are isolated using App-V.

In addition, the versions of Java had to be sequenced within the same virtual environment as the parent application. This would eventually start to change with App-V 4.5. In that particular release, DSC (Dynamic Suite Composition) was introduced allowing applications dependent upon Java to be sequenced separately from Java and linked together.

Methods

With the release of App-V 5 and its subsequent iterations, the options for Java have become more flexible. However, since the primary reason for virtualizing Java is to be able to deploy multiple versions of the run-time to the same virtual or physical machine, not all options for virtualizing Java are necessarily on the table. Each option must be assessed on its own merit. The potential strategies for Java are as follows:

Packaged with Application or Browser

This is where the specific JRE middleware is installed alongside an application within the same App-V package. Not a very common solution, as it requires the master application package to be updated whenever the runtime needs to be updated. Because of the many issues that come with this, Dynamic Suite Composition was introduced in version 4.5. This was later improved with Connection Groups in V5.

Connection Groups and Challenges

Connection Groups are where two or more applications are sequenced separately and brought into the same virtual environment (essentially a meta-bubble). This was introduced first in App-V 5 and then drastically improved in 5.0 SP3. This allows applications and prerequisite JRE packages to be updated independently. Connection Groups for Java run-times can be challenging – especially on RDS systems where many different users are running multiple applications dependent upon the same version of Java. Once a Java package has been initialized, it can only run within one Connection Group at a time. This requires proper planning and potential silo-ing for RDS scenarios.
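For reference, a minimal sketch of adding and enabling a connection group from its descriptor XML (the path and group name are placeholders):

# Add the connection group descriptor and enable it for all users on the machine
Import-Module AppvClient
Add-AppvClientConnectionGroup -Path '\\fileserver\content\JavaApps-ConnectionGroup.xml' | Enable-AppvClientConnectionGroup -Global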

RunVirtual

This is where a designated native application is linked to a virtual environment. RunVirtual (in its many forms) tells a native application to run within the virtual environment of the assigned application (as well as its connection group if the application belongs to one.) RunVirtual is a great solution for those natively installed applications to take advantage of interoperability with a virtual application. The ways you can configure a native application to “Run Virtually” are as follows:

  • The RunVirtual Registry Key: This works great as it is tied to the processes’ executable name. Can be configured per-machine or per-user starting with App-V 5.0 SP3.
  • Configured Package Shortcut: This is a good solution as it travels with the package.
  • Out-of-Band RunVirtual: Where a shortcut or command line contains the /AppVVE or /AppVPID switch, or PowerShell is used to run a native process within the virtual environment of a specific package.

All of the possible options for launching a native process into a virtual package’s environment (bubble) are found here: http://blogs.technet.com/b/gladiatormsft/archive/2013/04/24/app-v-5-0-launching-native-local-processes-within-the-virtual-environment.aspx
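As a sketch, the first and third approaches look like this – the package and version GUIDs are placeholders you would replace with the values from your Java package:

# RunVirtual registry key: launch the native iexplore.exe inside a package's virtual environment
# (use HKCU instead of HKLM for the per-user variant available from 5.0 SP3 onward)
$key = 'HKLM:\SOFTWARE\Microsoft\AppV\Client\RunVirtual\iexplore.exe'
New-Item -Path $key -Force | Out-Null
Set-ItemProperty -Path $key -Name '(default)' -Value '<PackageGuid>_<VersionGuid>'

# Out-of-band alternative: the /appvve switch on a shortcut or command line
# iexplore.exe /appvve:<PackageGuid>_<VersionGuid>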

Internet Explorer – A Worthy Separate Discussion

Internet Explorer warrants its own discussion primarily for two reasons:

  • Internet Explorer cannot be packaged and isolated from the native operating system.
  • Internet Explorer, like Explorer, is eligible for supporting primary and secondary virtualization environments through dynamic virtualization.

For those reasons, I segment my Internet Explorer and Java discussions from all other applications when discussing application virtualization strategies with customers.

Internet Explorer, Java, and RunVirtual

Configuring RunVirtual to bring the local Internet Explorer into the Java packages’ virtual environment is a simple way to allow for interoperability – but it can lead to its own issues:

  • RunVirtual via Registry Key: Whether it is per-user or per-machine, this methodology forces IE to only interact with one Java package (or else yield potential issues with RunVirtual collisions). Use this solution if only one Java package will be needed virtually with Internet Explorer for the user (or the machine, if configured per-machine).
  • RunVirtual using command-line switches (/AppVVE, etc.): This requires a lot of out-of-band shortcut management, but it does give flexibility so long as all other instances of Internet Explorer are configured for RunVirtual either in this manner or through packaged shortcuts.
  • Packaged Shortcuts: Using shortcuts to the local Internet Explorer – either captured via sequencing into the package manifest or configured via dynamic configuration. This method will create a special shortcut that essentially runs the native Internet Explorer virtually. It also travels with the package, and as long as the naming is unique, it will not create too much confusion, although it does mean that Internet Explorer must be launched using this specific shortcut to ensure it runs within the specified virtual package.

When you weigh out the “perceived” complicated options for bringing IE into an App-V Java package by Pros and Cons, you can simplify it using the table below:

  • RunVirtual through Registry Key (Global) – Pros: Simple to deploy. Cons: Does not travel with package; one Java per IE per machine.
  • RunVirtual through Registry Key (User) – Pros: Simple to deploy. Cons: Does not travel with package; one Java per IE per user.
  • Packaged Shortcut – Pros: Travels with package; allows for multiple Java packages. Cons: Creates multiple Internet Explorer shortcuts.
  • Out-of-Band RunVirtual (/AppVVE, etc.) – Pros: Allows for multiple Java packages. Cons: Does not travel with package; creates multiple Internet Explorer shortcuts.

 

Connection Group with EBIS (Empty Bubble w/ IE Shortcut)

This is where Internet Explorer is treated as a separate package through the creation of an "empty" virtual package containing only an Internet Explorer shortcut. That empty package is then linked to a virtual Java package using Connection Groups. If you want to use Connection Groups to link Internet Explorer with virtual Java packages instead of RunVirtual solutions, this may be the better solution – especially if you will be running both native and virtual Java on the same machine or device.

IE Native w/ JITV of Plug-In – Dynamic Virtualization Only

I have been starting to see this on App-V test matrices and I am a little bit concerned, as it adds unnecessary testing variables that can further delay a package's movement through common UAT (User Acceptance Testing) scenarios. The implicit assumption is that every plug-in package needs this scenario; that is not the case.

Dynamic Virtualization (also referred to as JITV, or just-in-time virtualization) allows shell extensions, browser plug-ins, and ActiveX controls from a virtual package to be virtualized within the native processes that host the COM objects. The key item here is COM objects – these extension points depend on dynamic virtualization of COM in-process objects. There are exceptions: some browser plug-ins only use HTML and script, with an object model completely separate from COM, so not all browser plug-ins require COM in-proc virtualization. Do you see where I am going here?

Adding One Final (Yet Significant) Variable – the Legacy 4.6 Client

Running 5.0 virtual packages that use Internet Explorer side-by-side with legacy 4.6 packages that also use Internet Explorer (with the 4.6 and App-V 5 clients coexisting) is supported. There were, however, some initial issues when running with Internet Explorer 10 and 11 due to Enhanced Protected Mode and some double-hooking problems, which were rectified by Hotfix Package 1 for 4.6 Service Pack 3 (https://support.microsoft.com/en-us/kb/2897394).

 

On App-V with Azure: Streaming Applications from the Cloud

April 17, 2015

With App-V in general, the Content Store (also referred to as the package source or streaming source) is the most critical component in both traditional streaming (stream-to-disk) scenarios and for Shared Content Store mode clients (stream-to-memory). Traditionally, Microsoft recommends placing Content Stores as close as possible to end-user devices, leveraging on-premises technologies such as DFS-R for replication and location. But what about those customers who are looking to leverage cloud services for App-V content for any of the following:

  • Disaster Recovery/Business Continuity solutions

  • Internet-Facing Scenarios

  • Part of an overall strategy to migrate from on-premises resources to hosted cloud services.

When looking to deploy Content Servers in Azure for application streaming, it is important to plan for regional proximity with a mechanism for replicating uniform copies of the App-V content just as you would have done in an on-premises environment.

Why Azure Web Roles can work for App-V Streaming

The App-V Content Server in the cloud is simply a hosted web server virtual machine with an attached storage configuration and a corresponding set of cloud services configured to allow downloading of .APPV package content via HTTP or HTTPS. This package source requires no additional management (other than security and MIME configuration for .APPV files) of the static package content and is simple to deploy and scale out as needed.
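A sketch of the MIME configuration piece (the site name and MIME type below are assumptions – adjust to your environment):

# Register a MIME type for .appv content on the content web site
Import-Module WebAdministration
Add-WebConfigurationProperty -PSPath 'IIS:\Sites\Default Web Site' -Filter 'system.webServer/staticContent' -Name '.' -Value @{ fileExtension = '.appv'; mimeType = 'application/appv' }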

Cloud Services and Endpoints

Assuming you have established an Azure subscription, setting up the necessary services is essential; however, a lot of the minor configuration will vary depending on how these cloud resources are integrated with your existing App-V infrastructure. For the sake of example, I will use the scenario of deploying a Content Server to the cloud for the purposes of providing cloud-based content.

In most cases, the order will be to (a minimal command sketch follows below):

  • Create the Cloud Service – to allow access to hosted Content VM's over the Internet

  • Create the Storage Account to store the VHDs.

If you want to learn more about Storage Accounts, the reference “What is a Storage Account?” http://azure.microsoft.com/en-us/documentation/articles/storage-whatis-account/ is a good start especially when understanding storage redundancy options.

  • Create the Virtual Networks

In addition, you will be leveraging external-facing virtual IPs (public IPs), an internal DIP, and an Azure Traffic Manager resource.
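A minimal sketch using the classic (Service Management) Azure PowerShell cmdlets that were current at the time of writing – the names and region are placeholders:

# Authenticate, then create the cloud service and storage account that will back the content server VM
Add-AzureAccount
New-AzureService -ServiceName 'contoso-appv-content' -Location 'East US'
New-AzureStorageAccount -StorageAccountName 'contosoappvstore' -Location 'East US'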

Why do I need a Cloud Service, Virtual Network, VIP and DIP?

If you want to learn more about Cloud Services, Virtual Networks, VIPs, and DIPs, I highly recommend Yung Chou's (my buddy in DPE from Charlotte, NC) article on Windows Azure Infrastructure Services IP Address Management at: http://blogs.technet.com/b/yungchou/archive/2014/03/17/windows_2d00_azure_2d00_infrastructure_2d00_services_2d00_ip_2d00_address_2d00_management_2d00_part_2d00_1_2d00_of_2d00_2.aspx

In addition, the following tutorials can walk you through the process:

VM Creation and Sizing

Content Servers in Azure can be any operating system supported for web services. In the case of Azure, it will be Windows Server 2008 R2, 2012, and 2012 R2 SKUs.

For Virtual Machine sizing purposes, it is recommended to align and plan capacity for Azure VM’s using the same guidelines for on-premises using the official App-V Sizing document: https://technet.microsoft.com/en-us/library/dn595131.aspx

In my early testing with customers and on my own, I have found it economical to scale out Standard tier A1 or A2 series VMs and load-balance as needed, since we are essentially only serving up web content. I'll also explain another reason when diving into the streaming protocol selection.


Internet Facing Scenarios

For App-V client retrieving content from cloud-based servers, there are three important factors to consider:

  • Streaming/Performance

  • Streaming/Bandwidth Costs

  • Security

Streaming

For Azure web services, streaming .APPV package content from the cloud is quicker using HTTP, although the tradeoff of non-secure transmission may not meet the security requirements of some organizations. For those organizations, additional configuration of the cloud services for HTTPS communications will be required. Also, you will need to flip the App-V clients to use single-range HTTP communication as opposed to multi-range.

BranchCache is Your Friend

To ensure fast, optimal delivery for on-premises App-V clients, and to provide the best experience possible for devices that may use the stream-to-disk scenario, it is recommended to have the clients configured for BranchCache in either hosted mode or distributed mode. In addition, the use of Shared Content Store mode is NOT recommended for on-premises clients due to limitations of offline access and heavy latency with the single-range HTTP protocol. Potential latency that may come with single-range protocols is offset and optimized by use of BranchCache. In addition, BranchCache can reduce overall traffic to the cloud.
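A sketch of flipping an on-premises client into distributed cache mode using the BranchCache cmdlets (Windows 8/Server 2012 and later):

# Enable BranchCache distributed cache mode on the client, then verify the resulting configuration
Enable-BCDistributed
Get-BCStatus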

Security

In addition to securing content transmission, you will want to secure access between your on-premises clients and the Azure-hosted cloud services. If the on-premises domain to which the App-V clients belong is federated with an Azure AD domain, you can secure access through individual users. Otherwise, you will need to leverage an alternative solution for restricting access.

Whitelisting IP Address Access

You can restrict access by IP address range in at least two ways. You can leverage the existing IP and Domain Restrictions feature in IIS. This will also work to secure Azure App-V Content Servers to only allow access from IP addresses and domains that you have specified in a whitelist: https://technet.microsoft.com/en-us/library/cc731598%28v=ws.10%29.aspx?f=255&MSPPError=-2147217396
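A sketch of whitelisting by address range with the IIS configuration cmdlets – the site name and address range are hypothetical, the IP and Domain Restrictions feature must be installed, and the section may need to be unlocked at the server level:

# Deny unlisted addresses and allow a specific corporate egress range on the content site
Import-Module WebAdministration
Set-WebConfigurationProperty -PSPath 'IIS:\Sites\AppVContent' -Filter 'system.webServer/security/ipSecurity' -Name 'allowUnlisted' -Value $false
Add-WebConfigurationProperty -PSPath 'IIS:\Sites\AppVContent' -Filter 'system.webServer/security/ipSecurity' -Name '.' -Value @{ ipAddress = '203.0.113.0'; subnetMask = '255.255.255.0'; allowed = $true }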

You can also secure access to the cloud endpoints using ACLs: http://azure.microsoft.com/en-us/documentation/articles/virtual-machines-set-up-endpoints/

Regardless of how the web service is secured on the back end, for seamless streaming it is also recommended to add the URLs of the resources to the App-V clients' Intranet Zone policy.

App-V 5: On the App-V 5 Virtual Registry

April 16, 2015

I have been meaning to follow up my previous discussions of the App-V registry staging subsystem with an article on the virtual registry in App-V 5. I will admit I am a little late to the follow-up; however, as they say, "better late than never." In a previous article I discussed registry staging and its effect on publishing and streaming. Now the discussion continues with how the virtual registry handles operations once it has been staged.

The virtual registry was implemented in a much more streamlined way in App-V 5 than in previous versions. First of all, real hive files are packaged within the .APPV package format. In addition, the real registry is used – the actual locations are state-separated – while the Virtual Registry component provides the correct redirection and COW (copy-on-write) operations, merging three elements into a single registry view:

  • The native registry

  • The immutable staged package registry (built from the registry.dat file within the package)

  • The user and machine COW registry

As with file assets, the merged registry view is maintained per package and per package group (virtual environment).

Functionality

At runtime, registry operations are hooked, and special sauce is applied to ensure the redirection, registry merging, and copy-on-write handling of changes.

Registry reads are done in an ordered approach by layer and you can easily confirm this with Process Monitor. The order is:

  • The COW registry is read first

  • Followed by the Package Registry (constructed from REGISTRY.DAT)

  • Finally the native registry for the requested location.

For registry writes, things are much simpler. Registry writes always go to the COW location corresponding to the original key that was opened. Registry data that is written to the user’s roaming registry is tokenized so that it is portable across machine boundaries.

Bear in mind, this is predicated on the registry location being viewed as virtual (opaque) to begin with and not having been excluded or configured as translucent in the sequencer. There are special markers contained in the registry that govern opacity, which I will mention later.

 

Registry COW Locations

The Virtual Registry will manage and track all COW locations for registry storage. The locations will vary depending on security context and type; a small enumeration sketch follows the lists below.

Roaming User Registry COW data will go here:

  • HKCU\Software\Microsoft\AppV\Client\Packages\<GUID>\REGISTRY

  • HKCU\Software\Microsoft\AppV\Client\PackageGroups\<GUID>\REGISTRY

Non Roaming User Registry and non-elevated Machine Registry COW data will go here:

  • HKCU\Software\Classes\AppV\Client\Packages\<GUID>\REGISTRY

  • HKCU\Software\Classes\AppV\Client\PackageGroups\<GUID>\REGISTRY

For Machine registry data coming from elevated processes, the Registry COW data will go here:

  • HKLM\Software\Microsoft\AppV\Client\Packages\<GUID>

  • HKLM\Software\Microsoft\AppV\Client\PackageGroups\<GUID>
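A small sketch for enumerating the roaming COW locations per package (the keys only exist once changes have actually been written):

# List copy-on-write registry data for each added App-V package
Import-Module AppvClient
Get-AppvClientPackage | ForEach-Object {
    $cow = "HKCU:\Software\Microsoft\AppV\Client\Packages\$($_.PackageId)\REGISTRY"
    if (Test-Path $cow) {
        "$($_.Name):"
        Get-ChildItem -Path $cow -Recurse | Select-Object -ExpandProperty Name
    }
}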

 

Special Registry Key and Value Metadata

You may have noticed that for some keys, you will also see some additional data:

This data is not on all keys, but when it is, it reflects the specific state of the key when responding to registry operations – particularly whether the key was previously deleted or whether it was configured to be opaque and not merged with the other registry layers.

  • 0x00010000: Key Deleted

  • 0x00020000: Value Deleted

  • 0x00040000: Key Opaque

If the value above also contains a 1 at the end of the type, this means there are sub keys present stored within the value data.
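A quick illustrative test of those state flags against an example value (the value itself is made up – the bit meanings are the ones listed above):

# Interpret the App-V virtual registry state flags described above
$flags = 0x00040001   # example only
if ($flags -band 0x00010000) { 'Key deleted' }
if ($flags -band 0x00020000) { 'Value deleted' }
if ($flags -band 0x00040000) { 'Key opaque (not merged with lower layers)' }
if ($flags -band 0x00000001) { 'Sub keys present (per the note above)' }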

Purging the COW

The Registry COW data is purged along with the rest of the user state when the package has been deleted or when the package is repaired with the –userstate switch.
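For reference, a sketch of triggering that purge for a single package (the package name is a placeholder):

# Purge user state (including COW registry data) for a specific package
Import-Module AppvClient
Get-AppvClientPackage -Name 'MyApp' | Repair-AppvClientPackage -UserState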
