Virtualisation

WEM 1808 UPDATE AVAILABLE

Wag the real - Alain Assaf blog - Fri, 10/12/2018 - 12:20
Intro: Fall, my favorite time of year. More so now that Citrix has released the next version of WEM. The version numbering system is now in line with other newly released Citrix products. This version is 1808. You can now download the new version here (requires Platinum licenses and login to Citrix.com). I’ve provided the […]
Categories: Citrix, Virtualisation

Standard Notes: a Note-Taking App with Client-Side Encryption

Helge Klein - Thu, 10/11/2018 - 00:26

Note-taking is one of those topics that appear to have been solved long ago, yet offer plenty of opportunity for new contestants. There are multiple reasons for that, but for me the number one is encryption. The major players, Microsoft OneNote, Evernote, and Google Keep, store your thoughts and ideas in plain text on any synchronized device as well as their cloud servers. In the age of hacks, leaks, and exploits it does not take much to conjure up scenarios where those thoughts become public knowledge all of a sudden.

If that creates an uncomfortable feeling: you are not alone. Luckily, others had it before. And at least one of them was a talented software developer. Thus came about Standard Notes. The following is a summary of my experiences with it. But let’s start with a quick recap.

Why Look for an Alternative Note-Taking App? Client-Side, Zero-Knowledge Encryption

I already mentioned this, but it cannot be stressed enough. Client-side means that the encryption happens on the client – unencrypted plain-text never leaves your device. Zero-knowledge means that the app vendor does not have access to the encryption key.

What Else is Missing from OneNote, Evernote and Keep?

Microsoft OneNote used to be a full-fledged member of the Office suite. That stopped with OneNote 2016, which is the last version of the full-capability desktop app Microsoft is going to release. Going forward, only the UWP app available on the Windows Store remains.

Evernote I have never used personally, so I cannot say much about it.

Google Keep is a little too simplistic for more than the simplest of requirements.

Important Standard Notes Features

This is a quick summary of what I believe are the most important features of Standard Notes:

  • Client-side encryption
  • Automatic (encrypted) backups, e.g. to Google Drive. A JavaScript-based decryption tool is saved along with the backup so the content is still accessible even if the vendor has gone out of business. This requires the Extended subscription.
  • 2FA: two-factor authentication via TOTP (e.g. Google Authenticator, Authy). This requires the Extended subscription.
  • Tagging system similar to the labels in Gmail. This is much more efficient for organizing (and finding) notes than a system where a note is stored in a folder hierarchy.
  • Built for longevity
  • Clients are available for all relevant platforms: Windows, macOS, Linux, Web, Android, iOS

What’s Missing from Standard Notes

Currently there are no keyboard shortcuts for navigating the app (the editors do have keyboard shortcuts). Support is planned, though.

Usage Tips for Standard Notes

Choose Your Editor Wisely

The free version comes with a plain text editor only. Rich text and Markdown editors are reserved for the Extended subscription.

Once on the Extended subscription, there are several Markdown editors to choose from. The Advanced Markdown Editor partially formats Markdown code in-place and provides easy access to formatting options by way of an icon bar and keyboard shortcuts. Alternatively, if you want to be able to switch between plain Markdown code and a preview of the formatted result, try the Simple Markdown Editor.

The rich text Plus Editor works similarly to Gmail. When used in the web app version of Standard Notes, it overrides Chrome’s CTRL+number keyboard shortcuts, though, which can be irritating (instead of navigating to the first tab, CTRL+1 now formats the current line as a headline). This is not an issue with the standalone app, of course.

You may want to select your preferred type of editor early on: even though switching between editors is easy, the formatting is not converted between Markdown and rich text.

Useful Extensions

The Folders extension lets you nest tags (create a hierarchy of tags as you can do with Gmail labels). The Quick Tags extension lets you choose from existing tags when adding tags to a note.

Smart Tags

Smart tags create virtual tags that display notes matching certain criteria (documentation). This requires the Folders extension. The following example creates a tag that only lists notes without associated tags:

`!["Untagged", "tags.length", "=", 0]`

Sharing Notes Privately

Standard Notes comes with Listed, a publishing service. After installing Listed you will find the menu item Publish to Private Link in the Actions menu. Private publishing basically decrypts your note, uploads it to Listed and publishes it under a cryptic link like https://listed.standardnotes.org/6tGUB6WrPw. It is important to note that private publishing does not give others access to your note directly. Instead, it makes a read-only copy of your note. Therefore, private publishing can be used to share information with anyone who has the link, but it does not allow for collaboration.

Importing to and Exporting from Standard Notes

Exporting a Markdown Note as HTML

To export a note formatted in Markdown as HTML (e.g. for reuse in a blog post), switch to the Fancy Markdown editor. A toggle next to the Preview button in the upper right corner lets you switch to HTML.

Exporting a Rich Text Note as HTML

To export a note formatted in rich text as HTML (e.g. for reuse in a blog post), click the code view button in the Plus Editor’s toolbar. You may want to beautify the resulting HTML code with one of the many online HTML beautifiers out there.

Importing Notes from OneNote

If you have been using OneNote before, like me, you would probably like to transfer your existing data over, preferably without losing too much of the formatting.

The bad news: OneNote is not very good at exporting to formats that are useful for migrating to other services. The other bad news: nobody seems to have filled that gap with a script or tool.

Here are some ways I found to get existing content from OneNote to Standard Notes.

OneNote to Standard Notes HTML

This is easy. The downside is that you have to migrate each page individually. Instructions:

  • In OneNote press CTRL+A followed by CTRL+C to copy all the page contents
  • Navigate to Standard Notes
  • Switch the editor to Plus Editor
  • Press CTRL+V to paste the copied page

In my tests, this resulted in a pretty faithful representation of the original OneNote page. Of course, you are limited to page contents Standard Notes supports.

OneNote to Standard Notes Markdown

The following only brings a rough approximation of the original formatting across from OneNote to Standard Notes. If you have a better workflow please let us know in a comment.

  • In OneNote press CTRL+A followed by CTRL+C to copy all the page contents
  • Open Notepad++
  • Select Edit -> Paste Special -> Paste HTML Content
  • Select the text within the body tag and press CTRL+C to copy
  • Navigate to Standard Notes
  • Switch the editor to any markdown editor
  • Press CTRL+V to paste the copied page

The Standard Notes Android App

Standard Notes’ Android app is superbly rated. It is basically the same as the web app, with UI adjustments for the smaller mobile device. Everything is synchronized very quickly, including the choice of editor. This makes it impossible to switch to an editor with a good preview on mobile while leaving your PC’s app settings unaffected.

One thing I do not quite “get” is navigation. When I click a note it opens the note’s configured editor. When I press the Back button, however, I get a Compose screen and need to press Back a second time before I am really back in the list of notes.

Standard Notes Online Demo

A fully-featured demo of Standard Notes is available online.

The post Standard Notes: a Note-Taking App with Client-Side Encryption appeared first on Helge Klein.

Dynamic Software Update Rings in Microsoft Intune

Aaron Parker's stealthpuppy - Wed, 10/10/2018 - 02:27

Microsoft Intune provides management of Windows 10 Update Rings to enable Windows as a Service, via the Software Updates feature. This enrols a Windows PC into Windows Update for Business to manage feature and quality updates the device receives and how quickly it updates to a new release. As you scale the number of devices managed by Microsoft Intune, the need to manage the software update or deployment rings is key to adopting Windows 10 successfully. Being able to do so dynamically and empowering end-users by involving them in the process sounds like an idea that’s just crazy enough to work. This article details an approach to achieve dynamic software update rings.

Dynamic Groups 

Azure AD Premium includes Dynamic Device and User groups whose membership can change – well, dynamically. This feature enables us to apply software update rings to dynamic groups whose membership can be based on just about any user or device property that suits our needs.

In most cases, applying Windows 10 Update Rings to devices, rather than users, is the best approach to ensure that updates can be better tracked across specific hardware and software combinations. I don’t necessarily want a user moving between PCs and have devices move back and forth between update rings. Basing update rings on dynamic device groups is then likely the better approach.

Software Update Rings

For the purposes of illustration, I’ve created a basic approach to update rings, with the 3 rings shown here:

  • Semi-Annual Channel – we need a catch-all ring applied to All Devices. If our dynamic groups that are based on a device property don’t catch a device, it won’t get the correct update ring applied. This approach ensures that, by default, a device is treated as generally production ready by being enrolled in the Semi-Annual Channel to receive well-tested updates. This ring is assigned to All Devices, while excluding the Azure AD dynamic groups assigned to all other rings.
  • Semi-Annual Channel (Targeted) – here devices are enrolled for a pilot ring so that the latest Windows 10 release can be tested before rolling out the majority of PCs. This ring applies to a specific Azure AD dynamic group
  • Windows Insider – to preview upcoming Windows 10 releases it’s important to be enrolled in the Windows Insider program. This ring applies to a specific Azure AD dynamic group

My update rings in this example are quite simple, but the approach can be customised for specific environments and needs.

Update Rings configured within Intune Software Updates

Assigning Devices

To assign a device to an update ring, we need to leverage a device property that can be dynamically set. Device Category fits the bill in a number of ways – the administrator can view the device category, and therefore the device’s update ring, by viewing the device properties in the Intune console. If the device category is not set (it will show as Unassigned), our catch-all update ring will ensure the device is set to a production-ready state.

Device properties in Intune

The device category can also be viewed in the Intune Company Portal, thus making it easy to view this property from multiple locations. This visibility makes device category a good choice for managing our update rings.

Device properties in the Intune Company Portal

The Intune Administrator creates device categories in the console. As you can see in the image below, I’ve chosen Production, Pilot and Preview as the device categories that provide, hopefully, clear indication as to what each category is for.

Intune Device categories

Here’s where the choice of using Device Category for assigning update rings is possibly a bit out there – the end-user chooses the device category! When enrolling their device or launching the Intune Company Portal for the first time they see the device category choices:

Setting a device category in the Intune Company Portal

There’s no replacement for end-user education, so it would behoove an organisation to include instructions on which category to choose, but in my mind it’s obvious that most users should choose Production. Having device category descriptions displayed as well would help, but they aren’t displayed at this time. Device categories are only shown once and the user cannot change the category after enrolment. Bulk changes to or reporting on categories can be achieved using the new Intune PowerShell SDK.
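
As a rough illustration (my addition, not from the original article), that kind of category reporting might look like the sketch below. It assumes the Microsoft.Graph.Intune module with the Connect-MSGraph and Get-IntuneManagedDevice cmdlets and the deviceCategoryDisplayName property; verify the names against your installed SDK version.

# Hedged sketch: count enrolled devices per device category via the Intune PowerShell SDK
# (module, cmdlet and property names assumed - check your SDK version)
Install-Module -Name Microsoft.Graph.Intune -Scope CurrentUser   # one-off
Connect-MSGraph
Get-IntuneManagedDevice |
    Group-Object -Property deviceCategoryDisplayName |
    Select-Object @{ Name = "DeviceCategory"; Expression = { $_.Name } }, Count |
    Sort-Object Count -Descending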

Dynamic Software Update Rings

Now that we have update rings in place and an approach to assigning them via Dynamic Device groups in Azure AD, we can create those groups based on membership rules that query Device Category. I’ve created two groups – Devices-Pilot and Devices-Preview – that use a query where deviceCategory equals Pilot or Preview respectively. A Devices-Production group can also be created, but isn’t required because the production update ring applies to All Devices; it would, however, assist with reporting.

Dynamic group membership rules

For these devices groups, the membership rules are:

  • Devices-Production: (device.deviceCategory -eq "Production") -or (device.deviceCategory -eq "Unknown") 
  • Devices-Pilot: (device.deviceCategory -eq "Pilot") 
  • Devices-Preview: (device.deviceCategory -eq "Preview") 

We can take this a step further and account for corporate vs. personal devices. Where users can enrol personal devices and you would prefer not to deploy software update policies to them, membership can be filtered further. Using an advanced membership rule, update the group membership with the rules below (a scripted sketch of creating one of these groups follows the list):

  • Devices-Production: ((device.deviceCategory -eq "Production") -or (device.deviceCategory -eq "Unknown")) -and (device.deviceOwnership -eq "Company") 
  • Devices-Pilot: (device.deviceCategory -eq "Pilot") -and (device.deviceOwnership -eq "Company") 
  • Devices-Preview: (device.deviceCategory -eq "Preview") -and (device.deviceOwnership -eq "Company") 
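
As a scripted sketch (my addition, not part of the original walkthrough), a group such as Devices-Pilot could be created with the AzureADPreview module’s New-AzureADMSGroup cmdlet. The parameter set below is an assumption and should be checked against your module version, and dynamic membership still requires Azure AD Premium.

# Hedged sketch: create the Devices-Pilot dynamic device group from PowerShell
# (assumes the AzureADPreview module; verify parameters for your version)
Connect-AzureAD
New-AzureADMSGroup -DisplayName "Devices-Pilot" `
    -Description "Pilot software update ring" `
    -MailEnabled $false -MailNickname "DevicesPilot" -SecurityEnabled $true `
    -GroupTypes "DynamicMembership" `
    -MembershipRule '(device.deviceCategory -eq "Pilot") -and (device.deviceOwnership -eq "Company")' `
    -MembershipRuleProcessingState "On"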

With these groups created, assignments for my Software update rings are:

  • Semi-Annual Channel – assign to All Devices and exclude Devices-Pilot and Devices-Preview. 
  • Semi-Annual Channel (Targeted) – assign to Devices-Pilot
  • Windows Insider – assign to Devices-Preview

When a category is assigned to a device, the dynamic group will update at some point and the policy will apply on a subsequent device policy refresh.

Dynamic Software Updates

The same approach can be used for deploying applications that provide preview channels similar to Windows. Microsoft Office 365 ProPlus is an obvious choice – we can create Office application deployments using Update Channels with assignments using our Dynamic Device groups.

Office 365 ProPlus apps in Intune to manage update channels

The update rings I’ve implemented in my test environment include:

  • Office 365 ProPlus Semi-Annual Channel or Semi-Annual Channel (Targeted) assigned to All Devices, excluding Devices-Pilot and Devices-Preview – a catch-all Office deployment package that will go out to the majority of devices
  • Office 365 ProPlus Semi-Annual Channel (Targeted) or Monthly Channel assigned to the Devices-Pilot group to receive the latest updates
  • Office 365 ProPlus Monthly Channel (Targeted) assigned to the Devices-Preview group to test Office Insider updates for testing upcoming features

Office 365 ProPlus then updates itself on the end-device based on the assigned channel. This actually works quite well for this application as you can pretty seamlessly move between channels as required.
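
As a side note (my addition, not from the original article), one way to confirm which channel a device has ended up on is to read the Click-to-Run configuration from the registry; the key path and value names below are assumptions based on common Click-to-Run installs and may differ between builds.

# Hedged sketch: show the Click-to-Run channel configuration on the local device
# (key path and value names such as CDNBaseUrl/UpdateChannel are assumed)
$c2r = "HKLM:\SOFTWARE\Microsoft\Office\ClickToRun\Configuration"
Get-ItemProperty -Path $c2r -ErrorAction SilentlyContinue |
    Select-Object CDNBaseUrl, UpdateChannel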

Wrapping Up

In this article, I’ve shown you how to enable dynamic software update rings for Windows and Office in Intune using Azure AD Dynamic Device groups. This uses what may be a controversial approach – device category chosen by the end-user. Modern device management forces us to rethink our engagement with end-users, and involving them more directly in the testing process can help make IT more personal.

For more controlled environments, the choice of category can be overwritten by the administrator, especially for users who may need to roll back to a more stable release.

Photo by Mathew Schwartz on Unsplash

This article by Aaron Parker, Dynamic Software Update Rings in Microsoft Intune appeared first on Aaron Parker.

Categories: Community, Virtualisation

October 2018 Windows Update Pulled After Deleting End User Files

Theresa Miller - Tue, 10/09/2018 - 05:30

As IT professionals, we often have backups of all of our critical systems. Whether it be a file server, an application server, or even a Microsoft Exchange server, we always make sure we back these servers up, and test that our backups are valid. Because we are so used to this as part of our […]

The post October 2018 Windows Update Pulled After Deleting End User Files appeared first on 24x7ITConnection.

vMotion for vGPUs

Theresa Miller - Tue, 10/02/2018 - 16:48

The introduction of vMotion for vGPUs was one of the exciting vSphere features announced at VMworld US this year. The announcements included a new vSphere edition (vSphere Platinum) and version (vSphere 6.7 Update 1). Security is the key feature of vSphere Platinum. It provides security at the hypervisor level with encryption in flight and at rest, TPM 2.0 (including virtual TPM 2.0). Access security […]

The post vMotion for vGPUs appeared first on 24x7ITConnection.

Visualising ConfigMgr, Intune and Windows 10 Releases

Aaron Parker's stealthpuppy - Wed, 09/26/2018 - 15:11

I recently presented a session titled ‘Modern Management Methodology Imaginarium‘ at the xenappblog.com Virtual Expo September 2018 event. In this session, I discussed my thoughts and approach to modern management, primarily for Windows 10. The session provided a bit of background, some definitions for what makes up the modern desktop and a high-level approach to implementing it.

The Modern Desktop

While the ‘modern desktop’ is most certainly a popular topic in the EUC space today, how to implement a modern desktop approach I think, is not yet widely understood. Organisations are looking to solve the same desktop challenges we’ve had for the past 20 years, in a more efficient and secure manner. Implementing the modern desktop requires defining a methodology that follows the same basic process followed for any desktop project – discovery and assessment, design, build, test, pilot, deploy (rinse and repeat). 

Successfully adopting the modern desktop requires leveraging analytics which is easier to achieve with current cloud-based toolsets (Microsoft has essentially made this free). Whilst analytics show you where you are, it’s important to understand where you need to get to, or at least what the journey will look like.

Faster Release Schedules

Software vendors have changed their approach to releases and more regular smaller releases are common. I posit that the effect of this on our methodology or approach is seen primarily in the design phase – a design document can be out of date a week after you’ve written it. Thus we should ensure that we document design principles and business outcomes rather than get bogged down in the details.

In my session, I demonstrated this with current Microsoft products – System Center Configuration Manager, Microsoft Intune and, of course, Windows 10 itself. The pace of releases has increased, which, while great for innovation, can put pressure on IT groups implementing and managing these products. Microsoft Intune has weekly updates!

Here’s the slide I created to visualise this theme.

Visualising ConfigMgr, Intune and Windows 10 Releases

Download the Slide

A number of people have asked about using the slide, so I’m making it available here for download to use in your own presentations. Download here in PowerPoint format: Visualising ConfigMgr, Intune and Windows 10 Releases.

Note that this is covered under the same license as all content on this site – a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License. If you use the slide, please keep the attribution intact. I welcome any updates or improvements you might have.

View the Session

Eric should be making the recordings from last week’s Virtual Expo available soon, so you should be able to see my session in full.

Photo by Alex Litvin on Unsplash

 

This article by Aaron Parker, Visualising ConfigMgr, Intune and Windows 10 Releases appeared first on Aaron Parker.

Categories: Community, Virtualisation

Diagnose any kind of problem in your Virtualized Environment with Goliath Technologies!

Theresa Miller - Tue, 09/25/2018 - 13:15

Virtualized environments are complex with many moving parts that include storage, networking, server hardware, hypervisors, and more. If you have ever dealt with an IT technical problem that became a major incident, then you know how complex troubleshooting can become. Then consider a complex problem that isn’t what it seems; where at first glance the […]

The post Diagnose any kind of problem in your Virtualized Environment with Goliath Technologies! appeared first on 24x7ITConnection.

MFA with Microsoft isn’t Scary

Theresa Miller - Thu, 09/20/2018 - 05:30

Multi-factor authentication (MFA) and the eventual abandoning of password based authentication is just around the corner. Of course MFA is available on many services right now, but saying goodbye to passwords is still a work in progress. The state of MFA with Microsoft isn’t scary at all, and it could be time to dip your […]

The post MFA with Microsoft isn’t Scary appeared first on 24x7ITConnection.

Who Said On-Premises Email Was Dead, Look Out Exchange Server 2019 is Here!

Theresa Miller - Tue, 09/18/2018 - 11:23

Well if you haven’t heard Exchange Server 2019 is now in public preview. During Microsoft Ignite 2017 it was announced that Exchange Server 2019 would be coming out in 2018. This announcement put away fears that Exchange Server 2016 would be the last on-premises version. Microsoft came through and released the public preview of Exchange […]

The post Who Said On-Premises Email Was Dead, Look Out Exchange Server 2019 is Here! appeared first on 24x7ITConnection.

Thunderbolt end-user experience macOS vs. Windows

Aaron Parker's stealthpuppy - Fri, 09/14/2018 - 10:12

Thunderbolt 3 (and USB-C) are here to provide a single cable for everything, although your experience with this technology will differ depending on your choice of operating system. Here’s a quick look at the end-user experience of TB on macOS and Windows.

Thunderbolt 3 on macOS

Thunderbolt on macOS just works – plug-in a TB device and off you go. This makes sense given that the standard was designed by Intel and Apple. Unpacking and plugging in a Thunderbolt dock with external displays, ethernet, audio etc., on macOS in just about every case will work without installing drivers.

Thunderbolt ports on the MacBook Pro

Here’s Apple’s dirty (not so) secret though – excluding the MacBook Air (and the Mini that comes with TB2), all current Macs have TB3 ports, except for the MacBook. It has a single USB-C port only. Maybe that’s OK – the TB target market is likely to be purchasing the Pro line anyway, but Apple isn’t a fan of labelling their ports, so caveat emptor.

macOS provides a good look at the devices plugged into your TB ports:

macOS System Report showing Thunderbolt devices

Note that while the MacBook Pro with Touch Bar has 4 Thunderbolt 3 ports, these are divided across 2 buses. If you have more than one device plugged in, ensure they’re plugged into either side of the laptop for best performance.

Thunderbolt 3 on Windows

Thunderbolt 3 on Windows 10? That is unfortunately not so straight-forward. 

I’ve been testing connection to my dock on an HP Elitebook x360 G2 that comes equipped with 2 x TB3 ports. The default Windows 10 image for this machine is an absolute mess that has a whole lot of software that isn’t required. Resetting the machine back to defaults strips it right back to the bare essentials, excluding the Thunderbolt driver and software. After plugging in a TB device, it isn’t recognised and no driver or software is downloaded from Windows Update. Interestingly, no driver or software was offered by the HP Support Assistant app designed to help end-users keep their HP PCs up to date.

Windows PCs equipped with Thunderbolt ports will have the driver and software installed by default, so typically this won’t be an issue; however, if you’re resetting the PC or creating a corporate image, you’ll need to install that software. Every OEM should supply Thunderbolt software for download, which for HP PCs is listed as Intel Thunderbolt 3 Secure Connect. The software is actually provided by Intel and available in various downloads on their site.

With the software installed and a device plugged in, the user sees a message box asking to approve the connection to a Thunderbolt device. Management actions such as approving or removing a device require administrator rights on the PC. Pluggable has a good article on the entire user experience and troubleshooting.

Approving connection to TB devices on Windows 10

Once approved, the device can then be viewed and managed. 

Viewing attached TB devices on Windows 10

Of course, once plugged in, Windows sees the peripherals and connects to them as usual.

Peripherals plugged into a TB dock on Windows 10

Thunderbolt on Windows isn’t as simple as it could be. It would be great to see drivers installed directly from Windows Update instead of being available separately, but once installed everything works as you would expect.

Wrap-up

Thunderbolt is unlikely to see as widespread adoption as USB 3.1, but users with specialised requirements such as video editing, CAD, etc., will benefit from the available bandwidth, which today is 40 Gbit/s vs. 10 Gbit/s. Early USB 3.2 hardware with 20 Gbit/s speeds has been demonstrated recently, and this may further reduce the need for some users to go to devices providing the higher bandwidth.

The end-user experience of TB on macOS vs. Windows 10 is kind of disappointing – Windows requires that you install drivers, and the software requires administrative rights. Not an ideal experience for home or SMB users, and these requirements might preclude the usage of Thunderbolt in enterprise environments. However, my own personal experience on a MacBook is pretty awesome – just plug in and go. Looks like I’ll be on macOS for the foreseeable future.

Update

Microsoft has an article on enabling Kernel DMA Protection for Thunderbolt 3. This requires Windows 10 1803 or above and must also be supported by the device drivers.

Photo by Linda Xu

This article by Aaron Parker, Thunderbolt end-user experience macOS vs. Windows appeared first on Aaron Parker.

Categories: Community, Virtualisation

Multi Cloud-Are we all talking about the same Multi Cloud?

Theresa Miller - Thu, 09/13/2018 - 05:30

The latest buzz word of the day is multi cloud and its usage with the enterprise. Lots of confusion and speculation but what does multi cloud really mean? Are we all talking about the same thing when we say Multi cloud? Because there are different cloud services offering types the meaning of multi cloud can […]

The post Multi Cloud-Are we all talking about the same Multi Cloud? appeared first on 24x7ITConnection.

Your VMworld US 2018 Recap, Announcements and Sessions

Theresa Miller - Tue, 09/11/2018 - 05:30

VMware took the stage once again in Las Vegas in August 2018 as another VMworld came and went which was loaded with announcements and content.  Lots of updates were shared for existing products as well as new products and even a brand new acquisition.  Not only were there lots of technical content and and update […]

The post Your VMworld US 2018 Recap, Announcements and Sessions appeared first on 24x7ITConnection.

Storage Sense on Windows 10 configured with Intune

Aaron Parker's stealthpuppy - Sun, 09/02/2018 - 10:46

In a modern management scenario, enabling end-points to perform automatic maintenance tasks will reduce TCO by avoiding scenarios that might result in support calls. Storage Sense in Windows 10 is a great way to manage free disk space on PCs by clearing caches, temporary files, old downloads, Windows Update cleanup, previous Windows versions, and more, but it’s not fully enabled by default. Storage Sense can potentially remove gigabytes of data, freeing up valuable space on smaller drives.

Here’s how to enable this feature on Windows 10 PCs enrolled in Microsoft Intune.

Storage Sense Settings

Storage Sense can be found in the Windows 10 Settings app and has only a few settings that can be changed. Typically a user may enable Storage Sense and accept the default settings, and for most PCs the defaults are likely good enough. Here’s what’s available in Windows 10 1803:

Enabling Storage Sense in Windows 10 Settings

Settings are stored in the user profile at:

HKEY_CURRENT_USER\Software\Microsoft\Windows\CurrentVersion\StorageSense\Parameters\StoragePolicy

Settings are stored somewhat cryptically, with numbers representing the various options.

Storage Sense settings in the Registry

These values translate to the following options and data, listed here by registry value:

  • Storage Sense (01): Off = 0, On = 1
  • Run Storage Sense (2048): Every Day = 1, Every Week = 7, Every Month = 30, When Windows decides = 0
  • Delete temporary files that my apps aren't using (04): Selected = 0, Not selected = 1
  • Delete files in my recycle bin if they have been there for over (08): Off = 0, On = 1; retention period (256): Never = 0, 1 day = 1, 14 days = 14, 30 days = 30, 60 days = 60
  • Delete files in my Downloads folder if they have been there for over (32): Off = 0, On = 1; retention period (512): Never = 0, 1 day = 1, 14 days = 14, 30 days = 30, 60 days = 60

Now that we know what the options are, we can decide on what to deploy and deliver them to enrolled end-points.

Configure via PowerShell

Using the values from the table above, a PowerShell script can be deployed via Intune to configure our desired settings. The script below will enable Storage Sense along with several settings to regularly remove outdated or temporary files.

# Enable Storage Sense
# Ensure the StorageSense key exists
$key = "HKCU:\SOFTWARE\Microsoft\Windows\CurrentVersion\StorageSense"
If (!(Test-Path "$key")) { New-Item -Path "$key" | Out-Null }
If (!(Test-Path "$key\Parameters")) { New-Item -Path "$key\Parameters" | Out-Null }
If (!(Test-Path "$key\Parameters\StoragePolicy")) { New-Item -Path "$key\Parameters\StoragePolicy" | Out-Null }

# Set Storage Sense settings
# Enable Storage Sense
Set-ItemProperty -Path "$key\Parameters\StoragePolicy" -Name "01" -Type DWord -Value 1

# Set 'Run Storage Sense' to Every Week
Set-ItemProperty -Path "$key\Parameters\StoragePolicy" -Name "2048" -Type DWord -Value 7

# Enable 'Delete temporary files that my apps aren't using'
Set-ItemProperty -Path "$key\Parameters\StoragePolicy" -Name "04" -Type DWord -Value 1

# Set 'Delete files in my recycle bin if they have been there for over' to 14 days
Set-ItemProperty -Path "$key\Parameters\StoragePolicy" -Name "08" -Type DWord -Value 1
Set-ItemProperty -Path "$key\Parameters\StoragePolicy" -Name "256" -Type DWord -Value 14

# Set 'Delete files in my Downloads folder if they have been there for over' to 60 days
Set-ItemProperty -Path "$key\Parameters\StoragePolicy" -Name "32" -Type DWord -Value 1
Set-ItemProperty -Path "$key\Parameters\StoragePolicy" -Name "512" -Type DWord -Value 60

# Set value that Storage Sense has already notified the user
Set-ItemProperty -Path "$key\Parameters\StoragePolicy" -Name "StoragePoliciesNotified" -Type DWord -Value 1

Modify the script as desired – at the very least the script should enable Storage Sense and leave the remaining settings as default. Save the script as a PowerShell file and deploy via the Intune console in the Azure portal. Ensure that the script runs with the logged on user’s credentials because it will write to HKCU.

Enabling Storage Sense with a PowerShell script in Intune

Assign the script to All Users and their PCs will receive the script. It’s important to note that, because the settings are stored in HKCU and are not policies, the user can either disable Storage Sense or change the other settings.
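
Because the settings are just per-user registry values, they can also be read back to check what a user currently has; here is a minimal sketch using the value names from the table above:

# Read the current Storage Sense configuration for the logged-on user
$policy = "HKCU:\SOFTWARE\Microsoft\Windows\CurrentVersion\StorageSense\Parameters\StoragePolicy"
Get-ItemProperty -Path $policy -ErrorAction SilentlyContinue |
    Select-Object "01", "2048", "04", "08", "256", "32", "512"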

Wrapping Up

Storage Sense is a great feature to enable on Windows 10 PCs for both personal and corporate PCs. In a modern management scenario, it’s another tool in our kit for enabling end-points to be self-sufficient, so I highly recommend testing and enabling the feature by default. This article has shown you how to configure Storage Sense via Intune and PowerShell with all of the possible combinations required to configure it to suit your requirements.

Hold On…

Storage Sense shows you how much disk capacity has been cleaned in the previous month in the Settings app. For a bit of a laugh, you can modify the value where this is stored so that Settings displays a saved-space figure that’s clearly not genuine.

Messing around with the value of saved space

You’ll find the registry value (20180901) in this key:

HKEY_CURRENT_USER\Software\Microsoft\Windows\CurrentVersion\StorageSense\Parameters\StoragePolicy\SpaceHistory

Image Credit: Photo by Florian Pérennès on Unsplash

This article by Aaron Parker, Storage Sense on Windows 10 configured with Intune appeared first on Aaron Parker.

Categories: Community, Virtualisation

Review of Additive Manufacture and Generative Design for PLM/Design at Develop 3D Live 2018

Rachel Berrys Virtually Visual blog - Wed, 05/16/2018 - 13:54

A couple of months ago, back at D3DLive!, I had the pleasure of chairing the Additive Manufacturing (AM) track. This event, in my opinion, alongside a few others (e.g. Siggraph and COFES), is one of the key technology and futures events for the CAD/Graphics ecosystem. The event is also free, thanks in part to sponsorship from HP, Intel, AMD and Dell.

A few years ago, at such events the 3D-printing offerings were interesting, quirky but not really mainstream manufacturing or CAD. There were 3D-printing vendors and a few niche consultancies, but it certainly wasn’t technology making keynotes or mentioned by the CAD/design software giants. This year saw the second session of the day on the keynote stage (video here) featuring a generative design demo from Bradley Rothenberg of nTopology.

With a full track dedicated to Additive Manufacture (AM) this year, including the large mainstream CAD software vendors such as Dassault, Siemens PLM and Autodesk, this technology really has hit the mainstream. The track was well attended, and when polled approximately half of the attendees were actually involved in implementing additive manufacture, with a significant proportion using it in production.

There was in general a significant overlap between many of the sessions; this technology has now become so mainstream that, rather than seeing new concepts, we are seeing – as with mainstream CAD – more of an emphasis on specific product implementations and GUIs.

The morning session was kicked off by Sophie Jones, General Manager of Added Scientific, a specialist consultancy with strong academic research links that investigates future technologies. This really was futures stuff rather than the mainstream, covering 3D-printing of tailored pharmaceuticals and healthcare electronics.

Kieron Salter from KWSP then talked about some of their user case studies, as a specialist consultancy they’ve been needed by some customers to bridge the gaps in understanding. In particular, some of their work in the Motorsports sector was particularly interesting as cutting-edge novel automotive design.

Jesse Blankenship from Frustum gave a nice overview of their products and their integration into Solid Edge, Siemens NX and Onshape but he also showed the developer tools and GUIs that other CAD vendors and third-parties can use to integrate generative design technologies. In the world of CAD components, Frustum look well-placed to become a key component vendor.

Andy Roberts from Desktop Metal gave a rather beautiful demonstration walking through the generative design of a part, literally watching the iteration from a few constraints to an optimised part. This highlighted how different many of these parts can be compared to traditional techniques.

The afternoon’s schedule started with a bonus session that hadn’t made the printed schedule from Johannes Mann of Volume Graphics. It was a very insightful overview of the challenges in fidelity checking additive manufacturing and simulations on such parts (including some from Airbus).

Bradley Rothenberg of nTopology reappeared to elaborate on his keynote demo and covered some of the issues for quality control and simulation for generative design that CAM/CAE have solved for conventional manufacturing techniques.

Autodesk’s Andy Harris’ talk focused on how AM was enabling new genres of parts that simply aren’t feasible via other techniques. The complexity and quality of some of the resulting parts were impressive and often incredibly beautiful.

Dassault’s session was given by a last-minute substitute speaker, David Reid; I haven’t seen David talk before and he’s a great speaker. It was great to see a session led from the Simulia side of Dassault and how their AM technology integrates with their wider products. A case study on Airbus’ choice and usage of Simulia was particularly interesting, as it covered how even the most safety-critical, traditional big manufacturers are taking AM seriously and successfully integrating it into their complex PLM and regulatory frameworks.

The final session of the day was probably my personal favourite. Louise Geekie from Croft AM gave a brilliant talk on metal AM, but what made it for me was her theme of understanding when you shouldn’t use AM and its limitations – basically, just because you can… should you? This covered long-term considerations on production volumes, compromises on material yield for surface quality, failure rates and the costs of post-production finishing. Just because a part has been designed by engineering optimisation doesn’t mean an end user finds it aesthetically appealing – the case of a motorcycle manufacturer that wants the front fork to “look” solid.

Overall my key takeaways were:

  • Just because you can doesn’t mean you should: choosing AM requires an understanding of the limitations and compromises, and an overall plan if volume manufacture is an issue.
  • The big CAD players are involved but there’s still work to be done to harden the surrounding frameworks, in particular reliable simulation, search and fidelity testing.
  • How well the surrounding products and technologies handle the types of topologies and geometries generative manufacture throws out will be interesting. In particular it’ll be interesting to watch how Siemens Synchronous Technology and direct modellers cope, and the part search engines such as Siemens Geolus too.
  • Generative manufacture is computationally heavy, and the quality of your CPU and GPU is worth thinking about.

Hardware OEMS and CPU/GPU Vendors taking CAD/PLM seriously

These new technologies are all hardware and computationally demanding compared to the modelling kernels of 20 years ago. AMD were showcasing and talking about all the pro-viz, rendering and cloud graphics technologies you’d expect but it was pleasing to see their product and solution teams and those from Dell, Intel, HP etc talking about computationally intensive technologies that benefit from GPU and CPU horse power such as CAE/FEA and of course generative design. It’s been noticeable in recent years in the increasing involvement and support from hardware OEMs and GPU vendors for end-user and ISV CAD/Design events and forums such as COFES, Siemens PLM Community and Dassault’s Community of Experts; which should hopefully bode well for future platform developments in hardware for CAD/Design.

Afterthoughts

A few weeks ago Al Dean from Develop3D wrote an article (bordering on a rant) about how poorly positioned a lot of the information around generative design (topology optimisation) and its link to additive manufacture is. I think many reading it simply thought – yes!

After reading it, I came to the conclusion that many think generative design and additive manufacture are inextricably linked. Whilst they can be used in conjunction, there are vast numbers of use cases where the use of only one of the technologies is appropriate.

Generative design in my mind is computationally optimising a design to some physical constraints – it could be mass of material, or physical forces (stress/strain) and could include additional constraints – must have a connector like this in this area, must be this long or even must be tapered and constructed so it can be moulded (include appropriate tapers etc – so falls out the mold).

Additive manufacture is essentially 3-D printing, often metals. Adding material rather than the traditional machining mentality of CAD (Booleans often described as target and tool) – removing stuff from a block of metal by machining.

My feeling is that generative design has far greater potential for reducing costs and optimising parts for traditional manufacturing techniques (e.g. 3/5-axis G-code-like considerations, machining, injection molding) than has been highlighted, whilst AM as a prototyping workflow for those techniques is less mature than it could be, because the focus has been on the weird and wonderful organic parts you couldn’t make before without AM/3-D Printing.

AWS and NICE DCV – a happy marriage! … resulting in a free protocol on AWS

Rachel Berrys Virtually Visual blog - Thu, 05/03/2018 - 13:12

It’s now two years since Amazon bought NICE and their DCV and EnginFrame products. NICE were very good at what they did. For a long time they were one of the few vendors who could offer a decent VDI solution that supported Linux VMs; with a history in HPC and Linux, they truly understood virtualisation and compute as well as graphics. They’d also developed their own remoting protocol, akin to Citrix’s ICA/HDX, and it was one of the first to leverage GPUs for tasks like H.264 encode.

Because they did Linux VMs and neither Citrix nor VMware did, NICE were often a complementary partner rather than a competitor, although with both Citrix and VMware adding Linux support that has shifted a little. AWS promised to leave the NICE DCV products alone and have been true to that. However, the fact that Amazon now owns one of the best and most experienced protocol teams around has always raised the possibility they could do something a bit more interesting than most other clouds.

Just before Xmas in December 2017, without much fuss or publicity, Amazon announced that they’d throw NICE DCV in for free on AWS instances.

NICE DCV is a well-proven product with standalone customers, and for many users it offers an alternative to Citrix/VMware offerings; which raises the question: why run VMware/Citrix on AWS if NICE will do?

There are also an awful lot of ISVs looking to offer cloud-based services and products including many with high graphical demands. To run these applications well in the cloud you need a decent protocol, some have developed their own which tend to be fairly basic H.264, others have bought in technology from the likes of Colorado Code Craft or Teradici’s standalone Cloud Access Software based around the PCoIP protocol. Throwing in a free protocol removes the need to license a third-party such as Teradici, which means the overall solution cost is cut but with no impact on the price AWS get for an instance. This could be a significant driver for ISVs and end-users to choose AWS above competitors.

Owning and controlling a protocol was a smart move on Amazon’s part; a protocol is a key element of remoting and of the performance of a cloud solution, so it makes perfect sense to own one. Microsoft, and hence Azure, already have RDS/RDP under their control. Will we see moves from Google or Huawei in this area?

One niggle is that many users need not just a protocol but a broker; at the moment Teradici and many others do not offer one themselves, and users need to go to another third-party such as Leostream to get the functionality to spin up and manage the VMs. Leostream have made a nice little niche supporting a wide range of protocols. It turns out that AWS are also offering a broker via the NICE EnginFrame technologies; this is an additional paid-for component, but the single-vendor offering may well appeal. This was really hard to find out – I couldn’t work out what was available from the AWS documentation and product overviews, and in the end I had to contact the AWS product managers for NICE directly to be certain.

Teradici do have a broker in-development, the details of which they discussed with Jack on brianmadden.com.

So, today there is the option of a free protocol and a paid-for broker (NICE + EnginFrame, albeit tied to AWS), and soon there will be a paid protocol from Teradici with a broker thrown in; the protocol is already available on the AWS marketplace.

This is just one example of many where cloud providers can take functionality in-house and boost their appeal by cutting out VDI, broker or protocol vendors. For those niche protocol and broker vendors they will need to offer value through platform independence and any-ness (the ability to choose AWS, Azure, Google Cloud) against out of the box one-stop cloud giant offerings. Some will probably succeed but a few may well be squeezed. It may indeed push some to widen their offerings e.g. protocol vendors adding basic broker capabilities (as we are seeing with Teradici) or widening Linux support to match the strong NICE offering.

In particular broker vendor Leostream may be pushed, as other protocol vendors may well follow Teradici’s lead. However, analysts such as Gabe Knuth have reported for many years on Leostream’s ability to evolve and add value.

We’ve seen so many acquisitions in VDI/Cloud where a good small company gets consumed by a giant and eventually fails, the successful product dropped and the technologies never adopted by the mainstream business. AWS seem to have achieved the opposite with NICE, continuing to invest in a successful team and product whilst leveraging exactly what they do best. What a nice change! It’s also good to see a bit more innovation and competition in the protocol and broker space.

Open-sourced Virtualized GPU-sharing for KVM

Rachel Berrys Virtually Visual blog - Thu, 03/22/2018 - 12:05

About a month ago Jack Madden’s Friday EUC news-blast (worth signing up for) highlighted a recent announcement from AMD around open-sourcing their GPU drivers for hardware shared-GPU (MxGPU) on the open-source KVM hypervisor.

The actual announcement was made by Michael De Neffe on the AMD site, here.

KVM is an open source hypervisor, favoured by many in the Linux ecosystem and segments such as education. Some commercial hypervisors are built upon KVM adding certain features and commercial support such as Red Hat RHEL. Many large users including cloud giants such as Google, take the open source KVM and roll their own version.

There is a large open source KVM user base who are quite happy to self-support, including a large academic research community. Open-sourced drivers enable both vendors and others to innovate and develop specialist enhancements. KVM is also a very popular choice in the cloud OpenStack ecosystem.

As far as I know, this is the first open-sourced GPU sharing technology available to the open source KVM base. AMD’s hardware sales model also suits this community well, with no software license or compulsory support; a model paralleling how CPUs/servers are purchased.

Shared GPU reduces the cost of providing graphics and suits the economies of scale and cost demanded in Cloud well. I imagine for the commercial and cloud based KVM hypervisors, ready access to drivers can only help accelerate and smooth their development on top of KVM.

The drivers are available to download here:

https://support.amd.com/en-us/download/workstation?os=KVM#. Currently there are only guest drivers for Windows OSs. However, being open source, this opens up the possibility for a whole host of third-parties to develop variants for other platforms.

There is also an AMD community forum where you can ask more questions if this is a technology of interest to you and read the various stacks and applications other users are interested in.

Significant announcements for AR/VR for the CAD / AEC Industries

Rachel Berrys Virtually Visual blog - Fri, 03/09/2018 - 16:22

Why should CAD care about AR/VR?

VR (Virtual Reality) is all niche headsets and gaming? Or putting bunny ears on selfies… VR basically has a marketing problem. It looks cool, but for many in enterprise it seems a niche technology to preview architectural buildings etc. In fact, the use cases are far wider if you get past those big boxy headsets. AR (Augmented Reality) is essentially bits of VR on top of something see-through. There’s a nice overview video of the Microsoft Hololens from Leila Martine at Microsoft, including some good industrial case studies (towards the end of the video), here. Sublime have some really insightful examples too, such as a Crossrail project using AR for digital twin maintenance.

This week there have been some _really_ very significant announcements from two “gaming” engines, Unity and the Unreal Engine (UE) from Epic. The gaming engines themselves take data about models (which could be CAD/AEC models) together with lighting and material information and put it all together in a “game” which you can explore – or thinking of it another way they make a VR experience. Traditionally these technologies have been focused on gaming and film/media (VFX) industries. Whilst these games can be run with a VR headset, like true games they can be used on a big screen for collaborative views.

Getting CAD parts into gaming engines has been very fiddly:
  • The meshed formats in VFX industries are quite different from those generated in CAD.
  • Enterprise CAD/AEC users are also unfamiliar with the very complex VFX industry software used to generate lighting and materials.
  • CAD / AEC parts are frequently very large and with multiple design iterations so a large degree of automation is needed to fix them up repeatedly (or a lot of manual hard work)
  • Large engineering projects usually consist of thousands of CAD parts, in different formats from different suppliers

Many have focused on the Autodesk FBX ecosystem and 3DS Max, who with tools like their Slate materials editor allowed the materials/lighting information to be added to the CAD data.  This week both Unreal and Unity announced what amounts to end-to-end solutions for a CAD to VR pipeline.

Unreal Engine

Last year at Siggraph in July 2017, Epic announced Datasmith for 3DS Max with the inference of another 20 or so formats to follow (they were listed on the initial beta sign-up dropdown) including ESRI, Solidworks, Revit, Rhino, Catia, Autodesk, Siemens NX, Sketchup; the website today lists fewer but more explicitly, here. This basically promises the technology to get CAD data from multiple formats/sources into a form suitable for VFX.

This week they followed it up with the launch of a beta of Unreal Studio. Develop3D have a good overview of the announcement, here. This is reminiscent of the Slate editor in 3DS Max, and it looks sleek enough that your average CAD/AEC user could probably use it without significant training (there are a lot of tutorial resources). With an advertised launch price of $49 per month it’s within the budget of your average small architectural firm, and the per-month billing makes it friendly to project-based billing.

Epic are taking on a big task to deliver the end-to-end solution themselves, but they seem to know what they are doing. Watching their hiring website over the last six months, they seem to have been hiring a large number of staff both in development (often in Canada) and in sales/business for these projects (hint: the roles are often tagged with enterprise – so easy to spot). Over the last couple of years they’ve also built up a leadership team for these projects, including Marc Petit, Simon Jones and Christopher Murray, and it’s worth reviewing the marketing material those folks are putting out.

Unity Announcement

On the same day as the UE announcement Unity countered with an announcement of providing a similar end-to-end solution via a partnership with PiXYZ, a small but specialist CAD toolkit provider.

Whilst the beta is not yet released, PiXYZ’s existing offerings look like a very good and established technology match. Their website is remarkably high on detail about specific functionality and it looks good. PiXYZ Studio, for example, has all the mesh fix-up tools you’d like for cleaning up CAD data for visualisation and VFX. PiXYZ Pipeline seems to cover all your import needs. I’ve heard credible rumours that a lot of the CAD-focused functionality is built on top of some of the most robust industry-licensed toolkits, so the signs are positive that this will be a robust, mature solution rather fast. This partnership seems to place Unity in a position to match the Datasmith UE offering.

It’s less clear what Unity will provide on the materials / lighting front, but I imagine something like the Unreal Studio offering will be needed.

What did we learn from iRay and vRay in CAD

Regarding static rendering in VFX land: vRay, Renderman, Arnold and iRay compete, with iRay taking a fairly small share. However, via strong GPU, hardware and software vendor partnerships, iRay has become the dominant choice in enterprise CAD (e.g. Solidworks Visualize etc). CAD loves to standardise, so it will be interesting to see whether a similar battle of Unity vs Unreal will unfold, with an eventual dominant force.

Licensing and vendor lock-in

This has all been enabled by the shift in licensing models of the gaming engines demonstrating they are serious about the enterprise space. For gaming a game manufacturer would pay a percentage such as 9% to use a gaming engine to create their game. This makes no sense in the enterprise space to integrate against a gaming engine which is a tiny additional feature on the overall CAD/PLM deployment. So, you will see lots of headlines about “Royalty Free” offerings, the revenues are in the products such as Datasmith and Studio. The degree to which both vendors rely on 3rd party toolkits and libraries under the hood e.g. CAD translators, the PiXYZ functionality etc will also dictate the profitability via how much Unreal or Unity have to pay in licensing costs.

These single vendor / ecosystem pipelines are attractive but relying on the gaming engine provider for the CAD import and materials could potentially lead to lock-in which always makes some customers nervous. Having done all the work of converting CAD data into something fit for rendering and VR I could see the attraction of being able to output it to iRay, Unity or Unreal, which of course is the opposite of what these products are.

Opportunities

There’s a large greenfield virgin market in CAD/AEC of customers who have very limited or no use of visualisation. Whilst the large AEC firms may have little pockets of specialist VFX, your average 10-man architecture firm doesn’t, and likewise for the bulk of the Solidworks base. This technology looks simple enough for those users, but I suspect uptake by SMBs may be slower than you might presume, because for projects won on the lowest bid, why add a VR/AR/professional render component if Sketchup or similar is sufficient?

In enterprise CAD, AEC and GIS there are already VR users with bespoke solutions and strong specialist software offerings (often expensive) and it will be interesting to see the dynamics between these mass-market offerings and the established high-end vendors such as ESI.io or Optis.

These announcements are also setting Unity and Unreal up to start nibbling into the VFX, film and media ecosystems where specialist complex materials and lighting products are used. For many in AEC/CAD these products are a bit overkill. A lot of these users are likely to be less inclined to build their own materials and simply want libraries mapping the CAD materials (“this part is Steel”) to the VFX materials (“this is Steel and Steel should behave like this in response to light”). In the last month or so we’ve seen UE also move into traditional VFX territory with headlines such as “Visually Stunning Animated Feature ‘Allahyar and the Legend of Markhor’ is the First Produced Entirely in Unreal Engine” and Zafari – a new children’s cartoon TV series made using UE.

 

I haven't yet seen any evidence of integrations with the CAD materials ecosystems that would bridge that gap between the CAD material ("this part is Steel") and the VFX material ("this is how Steel should behave in response to light"). If this type of solution becomes mainstream, it would be nice to see the material specialists (e.g. Granta Design) and CAD catalogues (e.g. Cadenas) carry information about how VFX-style visualisation should be done based on the engineering material data. One to look out for.

Overall, I'm very interested in these announcements; there's lots of sound technology and plenty of use cases, but whether the mass market is quite over the silly VR headset focus just yet… we'll soon find out. :-)

Looking at the Hyper-V Event Log (January 2018 edition)

Microsoft Virtualisation Blog - Tue, 01/23/2018 - 22:57

Hyper-V has changed over the last few years and so has our event log structure. With that in mind, here is an update of Ben’s original post in 2009 (“Looking at the Hyper-V Event Log”).

This post gives a short overview on the different Windows event log channels that Hyper-V uses. It can be used as a reference to better understand which event channels might be relevant for different purposes.

As general guidance, you should start with the Hyper-V-VMMS and Hyper-V-Worker event channels when analyzing a failure. For migration-related events it makes sense to look at the event logs on both the source and destination node.
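
For example, here is a minimal sketch using the built-in Get-WinEvent cmdlet (not the HyperVLogs module mentioned below) that pulls recent warnings and errors from those two channels on both nodes of a migration. The node names are placeholders and the exact channel names can vary between Windows versions:

# Minimal sketch: query the Hyper-V VMMS and Worker admin channels on the
# source and destination node of a failed migration (node names are examples).
$nodes    = "HV-NODE-01", "HV-NODE-02"
$channels = "Microsoft-Windows-Hyper-V-VMMS-Admin",
            "Microsoft-Windows-Hyper-V-Worker-Admin"

foreach ($node in $nodes) {
    foreach ($channel in $channels) {
        Get-WinEvent -ComputerName $node -LogName $channel -MaxEvents 50 -ErrorAction SilentlyContinue |
            Where-Object { $_.LevelDisplayName -in "Error", "Warning" } |
            Select-Object MachineName, LogName, TimeCreated, Id, Message
    }
}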

Below are the current event log channels for Hyper-V. Using “Event Viewer” you can find them under “Applications and Services Logs”, “Microsoft”, “Windows”.
If you would like to collect events from these channels and consolidate them into a single file, we’ve published a HyperVLogs PowerShell module to help.

  • Hyper-V-Compute: Events from the Host Compute Service (HCS) are collected here. The HCS is a low-level management API.
  • Hyper-V-Config: This section is for anything that relates to virtual machine configuration files. If you have a missing or corrupt virtual machine configuration file, there will be entries here that tell you all about it.
  • Hyper-V-Guest-Drivers: Look at this section if you are experiencing issues with VM integration components.
  • Hyper-V-High-Availability: Hyper-V clustering-related events are collected in this section.
  • Hyper-V-Hypervisor: This section is used for hypervisor-specific events. You will usually only need to look here if the hypervisor fails to start – then you can get detailed information here.
  • Hyper-V-StorageVSP: Events from the Storage Virtualization Service Provider. Typically you would look at these when you want to debug low-level storage operations for a virtual machine.
  • Hyper-V-VID: These are events from the Virtualization Infrastructure Driver. Look here if you experience issues with memory assignment, e.g. dynamic memory, or changing static memory while the VM is running.
  • Hyper-V-VMMS: Events from the virtual machine management service can be found here. When VMs are not starting properly, or VM migrations fail, this would be a good source to start investigating.
  • Hyper-V-VmSwitch: These channels contain events from the virtual network switches.
  • Hyper-V-Worker: This section contains events from the worker process that is used for the actual running of the virtual machine. You will see events related to startup and shutdown of the VM here.
  • Hyper-V-Shared-VHDX: Events specific to virtual hard disks that can be shared between several virtual machines. If you are using shared VHDs, this event channel can provide more detail in case of a failure.
  • Hyper-V-VMSP: The VM security process (VMSP) is used to provide secured virtual devices like the virtual TPM module to the VM.
  • Hyper-V-VfpExt: Events from the Virtual Filtering Platform (VFP), which is part of the Software Defined Networking stack.
  • VHDMP: Events from operations on virtual hard disk files (e.g. creation, merging) go here.

Please note: some of these only contain analytic/debug logs that need to be enabled separately and not all channels exist on Windows client. To enable the analytic/debug logs, you can use the HyperVLogs PowerShell module.
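If you prefer the built-in tooling instead, a rough sketch for enabling, capturing and then reading an analytic channel looks like this. The channel name below is only an example; "wevtutil el" lists the channels that actually exist on your host:

# Enable an analytic channel, reproduce the issue, disable the channel, then read it.
# The channel name is an example only; "wevtutil el" lists the channels on your system.
wevtutil sl "Microsoft-Windows-Hyper-V-Worker-Analytic" /e:true
# ... reproduce the issue you are investigating ...
wevtutil sl "Microsoft-Windows-Hyper-V-Worker-Analytic" /e:false
Get-WinEvent -LogName "Microsoft-Windows-Hyper-V-Worker-Analytic" -Oldest -MaxEvents 100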

All the best,

Lars

Categories: Microsoft, Virtualisation

A smaller Windows Server Core Container with better Application Compatibility

Microsoft Virtualisation Blog - Mon, 01/22/2018 - 19:04

In Windows Server Insider Preview Build 17074 released on Tuesday Jan 16, 2018, there are some exciting improvements to Windows Server containers that we’d like to share with you.  We’d love for you to test out the build, especially the Windows Server Core container image, and give us feedback!

Windows Server Core Container Base Image Size Reduced to 1.58GB!

You told us that the size of the Server Core container image affects your deployment times, takes too long to pull down and takes up too much space on your laptops and servers alike.  In our first Semi-Annual Channel release, Windows Server, version 1709, we made some great progress reducing the size by 60% and your excitement was noted.  We’ve continued to actively look for additional space savings while balancing application compatibility. It’s not easy but we are committed.

There are two main directions we looked at:

1) Architecture optimization to reduce duplicate payloads

We are always looking for ways to optimize our architecture. In Windows Server, version 1709, along with the substantial reduction in the Server Core container image, we also made some substantial reductions in the Nano Server container image (dropping it below 100MB). In doing that work we identified that some of the same architecture could be leveraged for the Server Core container. In partnership with other teams in Windows, we were able to implement changes in our build process to take advantage of those improvements. The great part about this work is that you should not notice any differences in application compatibility or experience, other than a nice reduction in size and some performance improvements.

2) Removing unused optional components

We looked at all the various roles, features and optional components available in Server Core and broke them down into a few buckets in terms of usage: frequently used in containers, rarely used in containers, those that we don't believe are being used, and those that are not supported in containers. We leveraged several data sources to help categorize this list. First, those of you who have telemetry enabled, thank you! That anonymized data is invaluable to these exercises. Second, we used publicly available dockerfiles/images and, of course, feedback from GitHub issues and forums. Third, the roles and features that are not even supported in containers were an easy call to remove. Lastly, we also removed roles and features for which we see no evidence of customer use. We could do more in this space in the future, but we really need your feedback (telemetry is also very much appreciated) to help guide what can be removed or separated.

So, here are the numbers on Windows Server Core container size if you are curious:

  • 1.58GB, download size, 30% reduction from Windows Server, version 1709
  • 3.61GB, on disk size, 20% reduction from Windows Server, version 1709
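
If you are curious how that looks on your own host running the Insider build, a quick check with the Docker CLI would be something like the following; the repository name is an assumption based on the Insider container repo on Docker Hub, so verify the image and tag for your build:

# Pull the Insider Server Core base image and inspect its size on disk.
# Repository/tag are assumptions -- use the image matching your Insider host build.
docker pull microsoft/windowsservercore-insider
docker images microsoft/windowsservercore-insider

The SIZE column from docker images corresponds to the on-disk figure, while the download size is what you see reported per layer during the pull.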

MSMQ now installs in a Windows Server Core container

MSMQ has been one of the top asks we heard from you, and it ranks very high on Windows Server User Voice here. In this release we were able to partner with our Kernel team and make the change, which was not trivial. We are happy to announce that it now installs and passes our in-house application compatibility tests. Woohoo!

However, there are many different use cases and ways customers have used MSMQ. So please do try it out and let us know if it indeed works for you.
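
As a rough sketch of what trying it out could look like, the Dockerfile below installs the Message Queuing server feature on top of the Insider Server Core base image (the image name is an assumption, as above; MSMQ-Server is the standard Windows feature name):

# Hypothetical Dockerfile sketch: install MSMQ inside a Server Core Insider container.
# Base image name/tag is an assumption -- match it to your Insider host build.
FROM microsoft/windowsservercore-insider
# Install the Message Queuing server feature and print its state so the build log shows the result.
RUN powershell -Command "Install-WindowsFeature MSMQ-Server; Get-WindowsFeature MSMQ-Server"

Build it with docker build and check that the feature is reported as Installed in the build output.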

A Few Other Key App Compatibility Bug Fixes:

  • We fixed the issue reported on GitHub where services running in containers do not receive shutdown notifications.

https://github.com/moby/moby/issues/25982

  • We fixed this issue reported on GitHub and User Voice related to BitLocker and the FDVDenyWriteAccess policy: users were not able to run basic Docker commands like docker pull.

https://github.com/Microsoft/Virtualization-Documentation/issues/530

https://github.com/Microsoft/Virtualization-Documentation/issues/355

https://windowsserver.uservoice.com/forums/304624-containers/suggestions/18544312-fix-docker-load-pull-build-issue-when-bitlocker-is

  • We fixed a few issues reported on GitHub related to mounting directories between hosts and containers.

https://github.com/moby/moby/issues/30556

https://github.com/git-for-windows/git/issues/1007

We are so excited and proud of what we have done so far to listen to your voice, continuously optimize the Server Core container size and performance, and fix top application compatibility issues to make your Windows container experience better and meet your business needs. We love hearing how you are using Windows containers, and we know there are still plenty of opportunities ahead of us to make them even faster and better. Fun journey ahead of us!

Thank you.

Weijuan

Categories: Microsoft, Virtualisation

IoT Lifecycle attacks – lessons learned from Flash in VDI/Cloud

Rachel Berrys Virtually Visual blog - Wed, 08/23/2017 - 12:55
There are lots of parallels between cloud/VDI deployments and the Internet of Things (IoT): basically, they both involve connecting an end-point to a network.

One of the pain points in VDI for many years has been Flash redirection. Flash is a product that its makers Adobe seem to have been effectively de-investing in for years. With redirection there is both server and client software. Adobe dropped development for Linux clients many years ago, then surprisingly resurrected it late last year (presumably after customer pressure). Adobe have since said they will kill the Flash player on all platforms in 2020.

Flash was plagued by security issues and compatibility issues (client versions that wouldn’t work with certain server versions). In a cloud/VDI environment the end-points and cloud/data center are often maintained by different teams or even companies. This is exactly the same challenge that the internet of things faces. A user’s smart lightbulb/washing machine is bought with a certain version of firmware, OEM software etc. and how it is maintained is a challenge.

It's impossible for vendors to develop products that can predict the architecture of future security attacks, and patches are frequent. Flash incompatibility often led VDI users to apply registry hacks to disable the version matching between client and server software, simply to keep their applications working. When the Linux Flash clients were discontinued, users were left unsupported: Adobe no longer developed the code, and VDI vendors were unable to support closed-source Adobe code.

The Flash Challenges for The Internet of Things
  • Customers need commitments from OEMs and software vendors for support matrices and for how long a product will be updated/maintained.
  • IoT vendors need to implement version checking to protect end-clients/devices from being downgraded to vulnerable versions of firmware/software (life-cycle attacks).
  • In the same way that VDI can manage/patch end-points, vendors will need to implement ways to manage IoT end-points.
  • What happens to a smart device if the vendor drops support or goes out of business? Is the consumer left with an expensive brick? Can it even be used safely?

There was a recent article in the Washington Post on Whirlpool's lack of success with a connected washing machine; it comes with an app that lets you "allocate laundry tasks to family members" and share "stain-removing tips with other users". With uptake low, it raises the question of how long OEMs will maintain services and applications like these. Many consumer devices such as washing machines are expected to last 5+ years. Again, this is a challenge VDI/Cloud has largely solved for thin clients, devices with long 5-10 year refresh cycles.
