Industry news

New - Latest EPA Libraries

Netscaler Gateway downloads - Fri, 09/14/2018 - 18:30
New downloads are available for Citrix Gateway
Categories: Citrix, Commercial, Downloads

Thunderbolt end-user experience macOS vs. Windows

Aaron Parker's stealthpuppy - Fri, 09/14/2018 - 10:12

Thunderbolt 3 (and USB-C) are here to provide a single cable for everything, although your experience with this technology will differ depending on your choice of operating system. Here’s a quick look at the end-user experience of TB on macOS and Windows.

Thunderbolt 3 on macOS

Thunderbolt on macOS just works – plug in a TB device and off you go. This makes sense given that the standard was designed by Intel and Apple. Unpack and plug in a Thunderbolt dock with external displays, ethernet, audio etc., and on macOS in just about every case it will work without installing drivers.

Thunderbolt ports on the MacBook Pro

Here’s Apple’s dirty (not so) secret though – excluding the MacBook Air (and the Mac mini, which comes with TB2), all current Macs have TB3 ports except for the MacBook, which has a single USB-C port only. Maybe that’s OK – the TB target market is likely to be purchasing the Pro line anyway – but Apple isn’t a fan of labelling their ports, so caveat emptor.

macOS provides a good look at the devices plugged into your TB ports:

macOS System Report showing Thunderbolt devices

Note that while the MacBook Pro with Touch Bar has 4 Thunderbolt 3 ports, these are divided across 2 buses. If you have more than one device plugged in, spread them across both sides of the laptop for best performance.

Thunderbolt 3 on Windows

Thunderbolt 3 on Windows 10? That, unfortunately, is not so straightforward.

I’ve been testing the connection to my dock on an HP EliteBook x360 G2 that comes equipped with 2 x TB3 ports. The default Windows 10 image for this machine is an absolute mess with a whole lot of software that isn’t required. Resetting the machine back to defaults strips it right back to the bare essentials, which excludes the Thunderbolt driver and software. When a TB device is plugged in it isn’t recognised, and no driver or software is downloaded from Windows Update. Interestingly, no driver or software was offered by the HP Support Assistant app that’s designed to help end-users keep their HP PCs up to date.

Windows PCs equipped with Thunderbolt ports will have the driver and software installed by default, so typically this won’t be an issue; however, if you’re resetting the PC or creating a corporate image, you’ll need to install that software. Every OEM should supply Thunderbolt software for download, which for HP PCs is listed as Intel Thunderbolt 3 Secure Connect. The software is actually provided by Intel and available in various downloads on their site.
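
If you’re building a corporate image, a rough check along these lines will show whether any Thunderbolt device or software is present before you seal the image. The wildcard matches are an assumption on my part – exact device and service names vary by OEM and driver version – so treat this as a sketch rather than a definitive test:

# List any present devices whose name mentions Thunderbolt
Get-PnpDevice -PresentOnly | Where-Object { $_.FriendlyName -like "*Thunderbolt*" }

# Check whether the Intel/OEM software has installed a Thunderbolt-related service
Get-Service | Where-Object { $_.DisplayName -like "*Thunderbolt*" }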

With the software installed and a device plugged in, the user sees a message box asking to approve the connection to the Thunderbolt device. Management actions such as approving or removing a device require administrator rights on the PC. Plugable has a good article on the entire user experience and troubleshooting.

Approving connection to TB devices on Windows 10

Once approved, the device can then be viewed and managed. 

Viewing attached TB devices on Windows 10

Of course, once plugged in, Windows sees the peripherals and connects to them as usual.

Peripherals plugged into a TB dock on Windows 10

Thunderbolt on Windows isn’t as simple as it could be. It would be great to see drivers installed directly from Windows Update instead of being available separately, but once installed everything works as you would expect.

Wrap-up

Thunderbolt is unlikely to see adoption as widespread as USB 3.1, but users with specialised requirements, such as video editing and CAD, will benefit from the available bandwidth, which today is 40 Gbit/s vs. 10 Gbit/s. Early USB 3.2 hardware with 20 Gbit/s speeds has been demonstrated recently, and this may further reduce the need for some users to move to devices providing the higher bandwidth.

The end-user experience of TB on macOS vs. Windows 10 is kind of disappointing – Windows requires that you install drivers, and managing the software requires administrative rights. That’s not an ideal experience for home or SMB users, and these requirements might preclude the use of Thunderbolt in enterprise environments. However, my own personal experience on a MacBook is pretty awesome – just plug in and go. Looks like I’ll be on macOS for the foreseeable future.

Update

Microsoft has an article on enabling Kernel DMA Protection for Thunderbolt 3. This requires Windows 10 1803 or above and must also be supported by the device drivers.

Photo by Linda Xu

This article by Aaron Parker, Thunderbolt end-user experience macOS vs. Windows appeared first on Aaron Parker.

Categories: Community, Virtualisation

PackMan in practice

The Iconbar - Fri, 09/14/2018 - 08:00
For this first article looking at how to create PackMan/RiscPkg packages, I've decided to use my SunEd program as a guinea pig. Being a simple C application with no dependencies on other packages, it'll be one of the most straightforward things on my site to get working, and one of the easiest for other people to understand.

Read on to discover how to turn simple apps like SunEd into RiscPkg packages, and more importantly, how to automate the process.

Building your first package, the PackIt way

The RiscPkg policy manual is a rather dry document, so the easiest way of getting your first package built is to use the PackIt tool created by Alan Buckley. After loading PackIt, you can simply drag an application to its iconbar icon and up will pop a window to allow you to enter all the extra details that RiscPkg needs to know.

PackIt's package creation wizard

Once everything is filled in correctly, opening the menu and selecting the "Save" option should allow you to save out the resulting package zip file.

PackIt's output

... except that the current version of PackIt seems to save it out with the wrong filetype. No problem, just manually set the type to 'zip' or 'ddc' and things look a lot better:

The actual package content

Pretty simple, isn't it? The !SunEd app has been placed inside an Apps.File directory (mirroring the default install location for the application on the user's hard disc), while the information that was entered into PackIt's wizard has been saved to the RiscPkg.Control and RiscPkg.Copyright files.

The Control and Copyright files

Control is a simple text file containing the package metadata (the structure of which is the subject of much of the RiscPkg policy document), while Copyright is a verbatim copy of the copyright message you entered into PackIt's window.

PackIt's Copyright tab
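
To give a flavour of the metadata involved, here's a purely hypothetical sketch of what a Control file for something like SunEd might contain. The exact field names and which of them are required are defined by the RiscPkg policy manual, so treat this as illustrative rather than as a definitive template:

Package: SunEd
Version: 2.33-1
Section: File
Priority: Optional
Maintainer: A. Developer <a.developer@example.com>
Licence: Free
Description: Editor for SunBurst save game files
 A longer description of the application can follow
 on continuation lines like this.

A hypothetical RiscPkg.Control file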

Now that you have a package built, you can easily test it out by dragging it to PackMan's iconbar icon. PackMan will then go through the usual installation procedure, just as if it was a package you'd selected to install from the Internet.

Loading the package in PackMan

Automating package building

Filling in PackIt's wizard the first time you create a package for an app is all well and good, but what about when you want to release an update for the package? Entering the information all over again is going to waste your time and introduce the risk of making mistakes.

Most C/C++ developers are already familiar with using makefiles to build their programs. With a bit of effort, it's possible to create makefiles which can also automate creation of the corresponding RiscPkg package.

Before

After a brief bit of preparation, the 2003-vintage SunEd sources were tidied up and a simple makefile was written, allowing the application binary to be easily rebuilt on command.

The original SunEd source tree

CFLAGS = -Wall -mpoke-function-name -O2 -mlibscl -mthrowback -static

CC = gcc -c $(CFLAGS) -MMD
LINK = gcc $(CFLAGS)

SRCS = \
	suned \
	limp

OBJS = $(addsuffix .o, $(SRCS))

# Output file
!SunEd/!RunImage: $(OBJS)
	$(LINK) -o $@ $^ -mlibscl

# Object files
%.o: %.c
	$(CC) -MF d/$(basename $@) -o $@ $<

# Dependencies
-include d/*

The original SunEd makefile

As a brief overview:

  • c and h contain the source code as you would expect
  • d and o are used for intermediate files: autogenerated dependencies and object files
  • !SunEd is the full app, ready for distribution, and the makefile is only used to rebuild the !RunImage
And after

Rather than bore you with all the intermediate versions, I figured it was best to just jump straight to the final version of the makefile and the adjusted source structure.

The new SunEd source tree

CFLAGS = -Wall -mpoke-function-name -O2 -mlibscl -mthrowback -static

CC = gcc -c $(CFLAGS) -MMD
LINK = gcc $(CFLAGS)

CP = copy
CPOPT = A~CF~NQR~S~T~V

SRCS = \
	suned \
	limp

APP = Apps/File/!SunEd

ROAPP = $(subst /,.,$(APP))
OBJS = $(addprefix build/,$(addsuffix .o, $(SRCS)))

# Output file
build/!RunImage: $(OBJS)
	$(LINK) -o $@ $^ -mlibscl

# Object files
build/%.o: src/%.c build/dirs
	$(CC) -MF build/d/$(subst /,.,$(basename $@)) -o $@ $<

# Pattern rule for injecting version numbers into files
build/%.sed: src/template/% src/Version build/dirs
	sed -f src/Version $< > $@

# Explicit dependency needed for generated file build/VersionNum.sed
build/suned.o: build/VersionNum.sed

# Standard clean rule
clean:
	remove binary/zip
	remove source/zip
	x wipe build ~CFR~V

# Binary RiscPkg archive
binary.zip: build/pkg-dir
	remove binary/zip
	dir build.pkg
	zip -rqI9 ^.^.binary/zip *
	dir ^.^

# Source zip archive
source.zip: build/src-mani makefile COPYING
	remove source/zip
	zip -rqI9 source/zip src makefile COPYING

all: binary.zip source.zip

build/dirs:
	cdir build
	cdir build.o
	cdir build.d
	create build.dirs

# Double-colon rules execute in the order they're listed. So placing this rule
# here makes sure that the 'build' folder exists prior to the rule below being
# executed.
build/pkg-mani:: build/dirs

# Double-colon rules with no pre-requisites always execute. This allows us to
# make sure that build/pkg-mani is always up-to-date
build/pkg-mani::
	src/manigen src.pkg build.pkg-mani

# Same system as build/pkg-mani
build/src-mani:: build/dirs
build/src-mani::
	src/manigen src build.src-mani

# Create the package dir ready for zipping
build/pkg-dir: build/pkg-mani build/!RunImage build/Control.sed build/!Help.sed COPYING
# Copy over the static files
	x wipe build.pkg ~CFR~V
	$(CP) src.pkg build.pkg $(CPOPT)
# Populate the RiscPkg folder
	cdir build.pkg.RiscPkg
	$(CP) build.Control/sed build.pkg.RiscPkg.Control $(CPOPT)
	$(CP) COPYING build.pkg.RiscPkg.Copyright $(CPOPT)
# Populate the app folder
	$(CP) build.!Help/sed build.pkg.$(ROAPP).!Help $(CPOPT)
	$(CP) build.!RunImage build.pkg.$(ROAPP).!RunImage $(CPOPT)
# Create the dummy file we use to mark the rule as completed
	create build.pkg-dir

# Dependencies
-include build/d/*

The new SunEd makefile

As you can see, there have been a fair number of changes. Not all of them are strictly necessary for automating package creation (after all, a package is little more than a zip file), but this structure has resulted in a setup that helps to minimise the amount of work I'll need to do when preparing new releases. The setup should also be easily transferrable to the other software I'll be wanting to package.

What it does

  • The clean rule reduces things to the state you see above
  • The source.zip rule builds a source archive, containing exactly what you see above
  • The binary.zip rule builds the RiscPkg archive, performing the following operations to get there:
    • A copy of the src.pkg folder is made, in order to provide the initial content of the package zip - essentially, the static files which aren't modified/generated by the build.
    • As you'd expect, the !RunImage file gets built and inserted into the app. But that's not all!
    • The src.Version file is actually a sed script containing the package version number and date:

      s/__UPSTREAM_VERSION__/2.33/g
      s/__PACKAGE_VERSION__/1/g
      s/__RISCOSDATE__/28-Aug-18/g

      The src.Version file

      This sed script is applied to src.template.!Help to generate the help file that's included in the package, src.template.Control to generate the RiscPkg.Control file, and src.template.VersionNum. By driving all the version number / date references off of this one file, there won't be any embarrassing situations where a built program will display one version number in one location but another version number in another location.

    • src.template.VersionNum is a C header file, which is used to inject the app version and date into !RunImage.
    • The COPYING file in the root is used as the RiscPkg.Copyright file in the package.
  • All the intermediate files will be stored in a build folder, which helps keep the clean and source.zip rules simple.
  • Full dependency tracking is used for both the source.zip and binary.zip targets - adding, removing, or changing any of the files in src.pkg (or anywhere else, for source.zip) will correctly result in the target being rebuilt. This is achieved without introducing any situations where the targets are redundantly built - so a build system which tries to build tens or hundreds of packages won't be slowed down.
manigen

There are also a few extra files. The src.notes folder is a collection of notes from my reverse-engineering of the SunBurst save game format, which I've decided to include in the source archive just in case someone finds it useful. But that's not really relevant to this article.

manigen, on the other hand, is relevant. It's a fairly short and straightforward BASIC program, but it plugs a very large hole in make's capabilities: Make can only detect when files change, not directories. If you have a directory, and you want a rule to be executed whenever the contents of that directory changes, you're out of luck. For small projects like SunEd this isn't so bad, but for bigger projects it can be annoying, especially when all you really want to do with the files is archive them in a zip file.

Thus, manigen ("manifest generator") was born. All it does is recursively enumerate the contents of a directory, writing the filenames and metadata (length, load/exec addr, attributes) of all files to a single text file. However, it also compares the new output against the old output, only writing to the file if a change has been detected.

out%=0
ON ERROR PROCerror

REM Parse command line args
SYS "OS_GetEnv" TO args$
REM First 3 options will (hopefully) be 'BASIC --quit ""'
opt$ = FNgetopt : opt$ = FNgetopt : opt$=FNgetopt
REM Now the actual args
dir$ = FNgetopt
out$ = FNgetopt

DIM result% 1024

out%=OPENUP(out$)
IF out%=0 THEN out%=OPENOUT(out$)
mod%=FALSE

PROCprocess(dir$)
IF EOF#out%=FALSE THEN mod%=TRUE
IF mod% THEN EXT#out%=PTR#out%
CLOSE#out%
REM Oddity: Truncating a file doesn't modify timestamp
IF mod% THEN SYS "OS_File",9,out$
END

DEF PROCprocess(dir$)
LOCAL item%
item%=0
WHILE item%<>-1
SYS "OS_GBPB",10,dir$,result%,1,item%,1024,0 TO ,,,read%,item%
IF read%>0 THEN
n%=20
name$=dir$+"."
WHILE result%?n%<>0
name$=name$+CHR$(result%?n%)
n%+=1
ENDWHILE
PROCwrite(name$+" "+STR$~(result%!0)+" "+STR$~(result%!4)+" "+STR$~(result%!8)+" "+STR$~(result%!12))
IF result%!16=2 THEN PROCprocess(name$)
ENDIF
ENDWHILE
ENDPROC

DEF FNgetopt
LOCAL opt$
opt$=""
WHILE ASC(args$)>32
opt$ = opt$+LEFT$(args$,1)
args$ = MID$(args$,2)
ENDWHILE
WHILE ASC(args$)=32
args$ = MID$(args$,2)
ENDWHILE
=opt$

DEF PROCerror
PRINT REPORT$;" at ";ERL
IF out%<>0 THEN CLOSE#out%
END

DEF PROCwrite(a$)
LOCAL b$,off%
IF EOF#out% THEN mod%=TRUE
IF mod%=FALSE THEN
off%=PTR#out%
b$=GET$#out%
IF a$<>b$ THEN mod%=TRUE : PTR#out%=off%
ENDIF
IF mod% THEN BPUT#out%,a$
ENDPROC

manigen

On Unix-like OS's this is the kind of thing you could knock together quite easily using standard commands like find, ls, and diff. But the built-in *Commands on RISC OS aren't really up to that level of complexity (or at least not without the result looking like a jumbled mess), so it's a lot more sensible to go with a short BASIC program instead.

The usage of manigen in the makefile is described in more detail below.

Makefile magic

Looking at each section of the makefile in detail:

Pattern rules

# Object files
build/%.o: src/%.c build/dirs
	$(CC) -MF build/d/$(subst /,.,$(basename $@)) -o $@ $<

The pattern rule used for invoking the C compiler has changed. Output files are placed in the build directory, and input files come from the src directory. The substitution rule is used to remove the directory separators from the filename that's used for the dependency files, so that they'll all be placed directly in build.d. If they were allowed to be placed in subdirectories of build.d, we'd have to create those subdirectories manually, which would be a hassle.

# Pattern rule for injecting version numbers into files
build/%.sed: src/template/% src/Version build/dirs
	sed -f src/Version $< > $@

Another pattern rule is used to automate injection of the package version number and date into files: Any file X placed in src.template can have its processed version available as build.X/sed (or build/X.sed as a Unix path). The sed extension is just a convenient way of making sure the rule acts on the right files.

build/dirs

Both of the above rules are also configured to depend on the build/dirs rule - which is used to make sure the build directory (and critical subdirectories) exist prior to any attempt to place files in there:

build/dirs:
	cdir build
	cdir build.o
	cdir build.d
	create build.dirs

The file build.dirs is just a dummy file which is used to mark that the rule has been executed.

Explicit dependencies

# Explicit dependency needed for generated file build/VersionNum.sed
build/suned.o: build/VersionNum.sed

Although most C dependencies are handled automatically via the -MF compiler flag (and the -include makefile directive), some extra help is needed for build.VersionNum/sed because the file won't exist the first time the compiler tries to access it. By adding it as an explicit dependency, we can make sure it gets generated in time (although it does require some discipline on our part to make sure we keep track of which files reference build.VersionNum/sed).

Double-colon rules

# Double-colon rules execute in the order they're listed. So placing this rule
# here makes sure that the 'build' folder exists prior to the rule below being
# executed.
build/pkg-mani:: build/dirs

# Double-colon rules with no pre-requisites always execute. This allows us to
# make sure that build/pkg-mani is always up-to-date
build/pkg-mani::
	src/manigen src.pkg build.pkg-mani

The manigen program solves the problem of telling make when the contents of a directory have changed, but it leaves us with another problem: we need to make sure manigen is invoked whenever the folder we're monitoring appears in a build rule. The solution for this is double-colon rules, because they have three very useful properties, which are exploited above:

  1. A double-colon rule with no pre-requisites will always execute (whenever it appears in the dependency chain for the current build target(s)). This is the key property which allows us to make sure that manigen is able to do its job.
  2. You can define multiple double-colon rules for the same target.
  3. Double-colon rules are executed in the order they're listed in the makefile. So by having a rule which depends on build/dirs, followed by the rule that depends on nothing, we can make sure that the build/dirs rule is allowed to create the build folder prior to manigen in the second rule writing its manifest into it.
Of course, we could have just used one build/pkg-mani rule which manually creates the build folder every time it's executed. But the two-rule version is less hacky, and that's kind of the point of this exercise.

Creating the package directory

This is a fairly lengthy rule which does a few different things, but they're all pretty simple.

# Create the package dir ready for zipping
build/pkg-dir: build/pkg-mani build/!RunImage build/Control.sed build/!Help.sed COPYING
# Copy over the static files
	x wipe build.pkg ~CFR~V
	$(CP) src.pkg build.pkg $(CPOPT)
# Populate the RiscPkg folder
	cdir build.pkg.RiscPkg
	$(CP) build.Control/sed build.pkg.RiscPkg.Control $(CPOPT)
	$(CP) COPYING build.pkg.RiscPkg.Copyright $(CPOPT)
# Populate the app folder
	$(CP) build.!Help/sed build.pkg.$(ROAPP).!Help $(CPOPT)
	$(CP) build.!RunImage build.pkg.$(ROAPP).!RunImage $(CPOPT)
# Create the dummy file we use to mark the rule as completed
	create build.pkg-dir

Since there are many situations in which the copy command will not copy, I've wrapped up the right options to use in a variable. Care is taken to specify all the options, even those which are set to the right value by default, just in case the makefile is being used on a system which has things configured in an odd manner.

CP = copy
CPOPT = A~CF~NQR~S~T~V

In this case some of the options are redundant, since this rule completely wipes the destination directory before copying over the new files. But for bigger projects it might make sense to build the directory in a piecemeal fashion, where the extra options are needed.

Once the directory is built, the binary.zip rule can produce the resulting zip file:

# Binary RiscPkg archive
binary.zip: build/pkg-dir
	remove binary/zip
	dir build.pkg
	zip -rqI9 ^.^.binary/zip *
	dir ^.^

Note that in this case I could have merged the binary.zip and build/pkg-dir rules together, since build/pkg-dir is only used once. And arguably they should be merged, just in case I decide to test the app by running the version that's in the build.pkg folder and it writes out a log file or something that then accidentally gets included in the zip when I invoke the binary.zip rule later on.

But, on the other hand, keeping the two rules separate means that it's easy to add a special test rule that copies the contents of build.pkg somewhere else for safe testing of the app. And as mentioned above, for big apps/packages it may also make sense to break down build/pkg-dir into several rules, since wiping the entire directory each time may be a bit inefficient.

In closing

With a setup like the above, it's easy to automate building of packages for applications. Next time, I'll be looking at how to automate publishing of packages - generating package index files, generating the pointer file required for having your packages included in ROOL's index, and techniques for actually uploading the necessary files to your website.

No comments in forum

Categories: RISC OS

Multi Cloud-Are we all talking about the same Multi Cloud?

Theresa Miller - Thu, 09/13/2018 - 05:30

The latest buzzword of the day is multi cloud and its usage within the enterprise. Lots of confusion and speculation, but what does multi cloud really mean? Are we all talking about the same thing when we say multi cloud? Because there are different cloud services offering types the meaning of multi cloud can […]

The post Multi Cloud-Are we all talking about the same Multi Cloud? appeared first on 24x7ITConnection.

Orpheus hits crowdfunding target

The Iconbar - Tue, 09/11/2018 - 16:26
In July, Orpheus announced their plan to crowdfund their new project.

With their usual modesty, they recently quietly updated their website to say the company had raised the target figure and work has begun. Excellent news for the RISC OS market and for their customers.....

On a personal note, my 6 year old router had issues over the weekend. Richard Brown from Orpheus was on the phone sorting it out at 9am on Saturday morning and helping me to sort out a replacement router asap.....

Orpheus Internet website

No comments in forum

Categories: RISC OS

Your VMworld US 2018 Recap, Announcements and Sessions

Theresa Miller - Tue, 09/11/2018 - 05:30

VMware took the stage once again in Las Vegas in August 2018 as another VMworld came and went which was loaded with announcements and content.  Lots of updates were shared for existing products as well as new products and even a brand new acquisition.  Not only were there lots of technical content and an update […]

The post Your VMworld US 2018 Recap, Announcements and Sessions appeared first on 24x7ITConnection.

RISC OS interview with Jeroen Vermeulen

The Iconbar - Fri, 09/07/2018 - 05:53
This time, it is our pleasure to interview Jeroen Vermeulen, who has just released a RISC OS remake of the old BBC Micro game Dickie Brickie, which is now free on !Store.

Would you like to introduce yourself?
My name is Jeroen Vermeulen and I’m from The Netherlands. Recently I’ve remade the BBC Micro game Dickie Brickie for RISC OS which is available from the PlingStore.

How long have you been using RISC OS?
I’ve used RISC OS way back in the past and only quite recently came back to it. My experience with RISC OS started when I bought an Acorn A3000 in mid 1990. It was followed up with an A4000 which I used until around 1998. I then left the RISC OS scene. Shortly after the Raspberry Pi was introduced and RISC OS was available for it, I started to play around with it again. Nothing too serious until mid last year when I decided to pick up programming again and do programming on RISC OS as well. Before I owned an A3000, my brother and I owned a BBC Micro from around 1985.

What other systems do you use?
Windows 10 PC/laptop, Apple iPad.

What is your current RISC OS setup?
RPI 2B with Pi Zero Desktop and SSD. Next to that I use emulators on Windows 10 like RPCEMU, Arculator, VA5000.

What do you think of the retro scene?
I very much love the RISC OS as well as the BBC Micro retro scene. For RISC OS, for example, I find it amazing what Jon Abbott has been doing with ADFFS. For the BBC Micro I’m finally able to collect programs I once could only read about and have a play with them. Some of the new software that appears for the BBC Micro is extraordinary and I find it very interesting to follow the stardot.org.uk forums, with people like kieranhj, tricky, sarahwalker and robc, to name but a few, doing some wonderful things with the machine and making it work under emulation as well.

Do you attend any of the shows and what do you think of them?
No (not yet), but I follow the show reports via sites like Iconbar and Riscository. When available I even watch some of the show videos. I like that the reports/videos are online; they give some valuable extra/background information if you’ve not been there, as well as putting some faces to the names you otherwise only read about 😊

What do you use RISC OS for in 2018 and what do you like most about it?
Programming. I very much like the fact that e.g. AMCOG and Drag’nDrop programs are available and sources are “open” and thus can be studied to learn from. This and the AMCOG Dev Kit allow you to do things that would otherwise cost more time. It’s the reason why I decided to distribute the sources with the Dickie Brickie game as well, just in case…

Retro kind of things like running games and other programs. On my PC I have an application called LaunchBox which allows RISC OS and BBC Micro programs to be run with a click of a button under emulation. Software/Games that once I could only read about in the Acorn magazines of the time I’m now able to run. For some reason especially with the BBC Micro it was hard to get any software where we lived and we had to make do with programming some of it ourselves or get it by typing in from magazine listings. The latter leading me many years later to remake Dickie Brickie. Back in the day it was a lot of work to type it in, but when we ran it we finally got a glimpse what the machine was capable of with the sprites, sound and animations on display.

What is your favourite feature/killer program in RISC OS?
StrongED & StrongHelp, BBC Basic, Netsurf, ADFFS, ArcEm, BeebIt, InfoZip, AMCOG Dev Kit

What would you most like to see in RISC OS in the future?
Just ongoing developments in general like RISC OS Open is doing with some of the foundations of the system.

Favourite (vaguely RISC OS-related) moan?
Things can always be better of course, but sometimes I’m just amazed that RISC OS is still around and actively used and developed for. For what I want to do with RISC OS currently – mainly programming – and the fact that I’m still (re-)discovering/learning things, I don’t have any complaints.

Can you tell us about what you are working on in the RISC OS market at the moment?
I have been working on a remake of the BBC Micro game Dickie Brickie. I started remaking it using my own code, but when I learned about the AMCOG Dev Kit I switched over and rewrote most of the game. There is a really nice article on the game at the Riscository site.

Any surprises you can’t talk about yet, or dates to tease us with?
I’m investigating a next game to program. I quite like the idea of making a platform game, but I’ve some learning to do on how to do that so it could be a while.

Apart from iconbar (obviously) what are your favourite websites?
Riscository, RISC OS Open (Forums), RISCOS Blog, DragDrop, Stardot (Forums) and some recently discovered websites on programming and game development.

What are your interests beyond RISC OS?
Programming and IT in general.

If someone hired you for a month to develop RISC OS software, what would you create?
That’s a tough question… perhaps some updates to Paint.

Any future plans or ideas you can share with us?
I would like to investigate the use of the DDE and C language.

What would you most like Father Christmas to bring you as a present?
Nothing very special comes to mind. But it would be nice if JASPP would be allowed to distribute some more games, and/or if games from the past (e.g. 4th Dimension) would be more easily available.

Any questions we forgot to ask you?
No. Thank you very much for the interview!

No comments in forum

Categories: RISC OS

Storage Sense on Windows 10 configured with Intune

Aaron Parker's stealthpuppy - Sun, 09/02/2018 - 10:46

In a modern management scenario, enabling end-points to perform automatic maintenance tasks will reduce TCO by avoiding scenarios that might result in support calls. Storage Sense in Windows 10 is a great way to manage free disk space on PCs by clearing caches, temporary files, old downloads, Windows Update cleanup, previous Windows versions, and more, but it’s not fully enabled by default. Storage Sense can potentially remove gigabytes of data, freeing up valuable space on smaller drives.

Here’s how to enable this feature on Windows 10 PCs enrolled in Microsoft Intune.

Storage Sense Settings

Storage Sense can be found in the Windows 10 Settings app and has only a few settings that can be changed. Typically a user may enable Storage Sense and accept the default settings, and for most PCs the defaults are likely good enough. Here’s what’s available in Windows 10 1803:

Enabling Storage Sense in Windows 10 Settings

Settings are stored in the user profile at:

HKEY_CURRENT_USER\Software\Microsoft\Windows\CurrentVersion\StorageSense\Parameters\StoragePolicy

Settings are stored somewhat cryptically, with numbers representing the various options.

Storage Sense settings in the Registry

These values translate to the following options and values in the table below:

Setting | Registry Value | Option | Registry Data
Storage Sense | 01 | Off / On | 0 / 1
Run Storage Sense | 2048 | Every Day / Every Week / Every Month / When Windows decides | 1 / 7 / 30 / 0
Delete temporary files that my apps aren't using | 04 | Selected / Not selected | 0 / 1
Delete files in my recycle bin if they have been there for over | 08 | Off / On | 0 / 1
 | 256 | Never / 1 day / 14 days / 30 days / 60 days | 0 / 1 / 14 / 30 / 60
Delete files in my Downloads folder if they have been there for over | 32 | Off / On | 0 / 1
 | 512 | Never / 1 day / 14 days / 30 days / 60 days | 0 / 1 / 14 / 30 / 60

Now that we know what the options are, we can decide on what to deploy and deliver them to enrolled end-points.

Configure via PowerShell

Using the values from the table above, a PowerShell script can be deployed via Intune to configure our desired settings. The script below will enable Storage Sense along with several settings to regularly remove outdated or temporary files.

# Enable Storage Sense
# Ensure the StorageSense key exists
$key = "HKCU:\SOFTWARE\Microsoft\Windows\CurrentVersion\StorageSense"
If (!(Test-Path "$key")) { New-Item -Path "$key" | Out-Null }
If (!(Test-Path "$key\Parameters")) { New-Item -Path "$key\Parameters" | Out-Null }
If (!(Test-Path "$key\Parameters\StoragePolicy")) { New-Item -Path "$key\Parameters\StoragePolicy" | Out-Null }

# Set Storage Sense settings
# Enable Storage Sense
Set-ItemProperty -Path "$key\Parameters\StoragePolicy" -Name "01" -Type DWord -Value 1

# Set 'Run Storage Sense' to Every Week
Set-ItemProperty -Path "$key\Parameters\StoragePolicy" -Name "2048" -Type DWord -Value 7

# Enable 'Delete temporary files that my apps aren't using'
Set-ItemProperty -Path "$key\Parameters\StoragePolicy" -Name "04" -Type DWord -Value 1

# Set 'Delete files in my recycle bin if they have been there for over' to 14 days
Set-ItemProperty -Path "$key\Parameters\StoragePolicy" -Name "08" -Type DWord -Value 1
Set-ItemProperty -Path "$key\Parameters\StoragePolicy" -Name "256" -Type DWord -Value 14

# Set 'Delete files in my Downloads folder if they have been there for over' to 60 days
Set-ItemProperty -Path "$key\Parameters\StoragePolicy" -Name "32" -Type DWord -Value 1
Set-ItemProperty -Path "$key\Parameters\StoragePolicy" -Name "512" -Type DWord -Value 60

# Set value that Storage Sense has already notified the user
Set-ItemProperty -Path "$key\Parameters\StoragePolicy" -Name "StoragePoliciesNotified" -Type DWord -Value 1

Modify the script as desired – at the very least the script should enable Storage Sense and leave the remaining settings at their defaults. Save the script as a PowerShell file and deploy it via the Intune console in the Azure portal. Ensure that the script runs with the logged-on user’s credentials, because it writes to HKCU.

Enabling Storage Sense with a PowerShell script in Intune

Assign the script to All Users and their PCs will receive it. It’s important to note that, because the settings are stored in HKCU and are not policies, the user can still disable Storage Sense or change the other settings.
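
If you want to confirm what an enrolled device actually ended up with, reading the same key back is enough. A quick sketch:

# Dump everything currently set for the signed-in user
$key = "HKCU:\SOFTWARE\Microsoft\Windows\CurrentVersion\StorageSense\Parameters\StoragePolicy"
Get-ItemProperty -Path $key

# Or check a single value, e.g. whether Storage Sense itself is enabled ("01")
(Get-ItemProperty -Path $key).'01'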

Wrapping Up

Storage Sense is a great feature to enable on Windows 10 PCs, both personal and corporate. In a modern management scenario, it’s another tool in our kit for enabling end-points to be self-sufficient, so I highly recommend testing and enabling the feature by default. This article has shown you how to configure Storage Sense via Intune and PowerShell, with all of the possible settings required to make it suit your requirements.

Hold On…

Storage Sense shows you in the Settings app how much disk capacity has been cleaned in the previous month. For a bit of a laugh, you can modify the value where this is stored so that Settings displays an amount of space saved that’s clearly not genuine.

Messing around with the value of saved space

You’ll find the registry value (20180901) in this key:

HKEY_CURRENT_USER\Software\Microsoft\Windows\CurrentVersion\StorageSense\Parameters\StoragePolicy\SpaceHistory
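
If you want to see what’s in there before messing with it, listing the key is enough. I’m assuming here that the value names are simply dates, like the 20180901 example above:

# List the per-day space history values for the current user
Get-ItemProperty -Path "HKCU:\SOFTWARE\Microsoft\Windows\CurrentVersion\StorageSense\Parameters\StoragePolicy\SpaceHistory"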

Image Credit: Photo by Florian Pérennès on Unsplash

This article by Aaron Parker, Storage Sense on Windows 10 configured with Intune appeared first on Aaron Parker.

Categories: Community, Virtualisation

Acorn World at Cambridge computer museum, 8-9th Sept 2018

The Iconbar - Sat, 09/01/2018 - 01:55
Acorn World 2018
Sat 8th & Sun 9th September, 10am-5pm
@ The Centre for Computing History, Cambridge
http://www.computinghistory.org.uk/det/43277/Acorn-World-Exhibition-8th-9th-September-2018/

The Acorn & BBC User Group in association with the Centre for Computing History, Cambridge’s premier computer museum, are pleased to announce Acorn World 2018.

This exhibition will feature machines and software from across Acorn’s history and beyond, showing how they started, the innovative systems produced along the way, and the legacy of successful technology they left behind.

There will be a range of Acorn-era computers on display – and in many cases running for visitors to try out for themselves – covering everything from the System 1, through to the iconic RiscPC – which many recognise as the pinnacle of Acorn’s computer designs – and beyond, including the never-released Phoebe, and a number of rare prototypes. The vintage displays will also include classic magazines, sure to set those nostalgic flames burning, and software which enthralled, entertained, and educated many users – and even inspired some to go into programming themselves.
Some of those classic computers have been given a new lease of life by enthusiastic users, with modern add-ons and other clever innovations – and there will be a number of these on display as well.

The exhibition doesn’t just stop at machines that came directly from the Acorn stable, though – there will also be post-Acorn systems, including the ultra-cheap Raspberry Pi and at the other end of the scale, the ‘slightly pricier’ Titanium – both of which are themselves children of Cambridge.

Tickets are only £8 for adults, £7 for over 60s, and £6 for children. This includes access to all the museum’s exhibits featuring mainframe, mini, home computers and games consoles from the past 50 years, plus the Guinness World Record holding MegaProcessor. This is a fund raising event for the museum to help continue their important work preserving and archiving computing history.

The Centre for Computing History, Rene Court, Coldhams Rd, Cambridge, CB1 3EW
http://www.computinghistory.org.uk/

No comments in forum

Categories: RISC OS

August News round-up

The Iconbar - Fri, 08/31/2018 - 07:11
Some things we noticed this month. What did you see?

DDE28c update from ROOL.

Prophet Version 3.94 and Font Directory Pro 3.23 now available from Elesar

Orpheus Internet launches a crowdfunding campaign to finance the upgrading of their services. Latest total

It is games month on RISC OS blog!

New 32bit version of Dickie Brickie now on !Store for free.

R-Comp SP12a brings DualMonitor version 5 and lots of RISC OS 5.24 software updates to TiMachine.

The ROOL TCP/IP "phase 1" bounty reaches a major milestone with a beta release of the updated AcornSSL module, supporting the modern TLS protocol instead of the old and insecure SSL protocol.

André Timmermans releases DigitalCD 3.11 and KinoAmp 0.48. The new version of KinoAmp is able to use hardware overlays for improved playback performance on machines and OS versions which support that functionality.

ADFFS 2.68 released. ROOL have also updated their website

IconBar will be running regular articles over the Autumn after a bit of a summer break. We kick off next Friday with an interview....

No comments in forum

Categories: RISC OS

New - NetScaler Gateway (Feature Phase) 12.1 Build 49.23

Netscaler Gateway downloads - Thu, 08/30/2018 - 21:00
New downloads are available for Citrix Gateway
Categories: Citrix, Commercial, Downloads

New - Citrix Gateway (Feature Phase) 12.1 Build 49.23

Netscaler Gateway downloads - Thu, 08/30/2018 - 18:30
New downloads are available for Citrix Gateway
Categories: Citrix, Commercial, Downloads

New - Components for NetScaler Gateway 12.1

Netscaler Gateway downloads - Thu, 08/30/2018 - 18:30
New downloads are available for Citrix Gateway
Categories: Citrix, Commercial, Downloads

New - NetScaler Gateway (Feature Phase) Plug-ins and Clients for Build 12.1-49.23

Netscaler Gateway downloads - Thu, 08/30/2018 - 18:30
New downloads are available for Citrix Gateway
Categories: Citrix, Commercial, Downloads

Review of Additive Manufacture and Generative Design for PLM/Design at Develop 3D Live 2018

Rachel Berrys Virtually Visual blog - Wed, 05/16/2018 - 13:54

A couple of months ago, back at D3DLive!, I had the pleasure of chairing the Additive Manufacturing (AM) track. This event, in my opinion, is alongside a few others (e.g. Siggraph and COFES) one of the key technology and futures events for the CAD/Graphics ecosystem. It is also free, thanks in part to sponsorship from HP, Intel, AMD and Dell.

A few years ago at such events, the 3D-printing offerings were interesting and quirky, but not really mainstream manufacturing or CAD. There were 3D-printing vendors and a few niche consultancies, but it certainly wasn’t technology making keynotes or being mentioned by the CAD/design software giants. This year saw the second session of the day on the keynote stage (video here) featuring a generative design demo from Bradley Rothenberg of nTopology.

With a full track dedicated to Additive Manufacture (AM) this year, including the large mainstream CAD software vendors such as Dassault, Siemens PLM and Autodesk, this technology really has hit the mainstream. The track was well attended, with approximately half of the attendees, when polled, actually involved in implementing additive manufacture and a significant proportion using it in production.

There was in general a significant overlap between many of the sessions; this technology has now become so mainstream that, rather than seeing new concepts, we are seeing, as with mainstream CAD, more of an emphasis on specific product implementations and GUIs.

The morning session was kicked off by Sophie Jones, General Manager of Added Scientific a specialist consultancy with strong academic research links who investigate future technologies. This really was futures stuff rather than the mainstream covering 3D-printing of tailored pharmaceuticals and healthcare electronics.

Kieron Salter from KWSP then talked about some of their user case studies; as a specialist consultancy they’ve been needed by some customers to bridge the gaps in understanding. Some of their work in the Motorsports sector was particularly interesting as cutting-edge, novel automotive design.

Jesse Blankenship from Frustum gave a nice overview of their products and their integration into Solid Edge, Siemens NX and Onshape but he also showed the developer tools and GUIs that other CAD vendors and third-parties can use to integrate generative design technologies. In the world of CAD components, Frustum look well-placed to become a key component vendor.

Andy Roberts from Desktop Metal gave a rather beautiful demonstration walking through the generative design of a part, literally watching the iteration from a few constraints to an optimised part. This highlighted how different many of these parts can be compared to traditional techniques.

The afternoon’s schedule started with a bonus session that hadn’t made the printed schedule from Johannes Mann of Volume Graphics. It was a very insightful overview of the challenges in fidelity checking additive manufacturing and simulations on such parts (including some from Airbus).

Bradley Rothenberg of nTopology reappeared to elaborate on his keynote demo and covered some of the issues for quality control and simulation for generative design that CAM/CAE have solved for conventional manufacturing techniques.

Autodesk’s Andy Harris’ talk focused on how AM was enabling new genres of parts that simply aren’t feasible via other techniques. The complexity and quality of some of the resulting parts were impressive and often incredibly beautiful.

Dassault’s session was given by a last-minute speaker substitution of David Reid; I haven’t seen David talk before and he’s a great speaker. It was great to see a session led from the Simulia side of Dassault and how their AM technology integrates with their wider products. A case study on Airbus’ choice and usage of Simulia was particularly interesting as it covered how even the most safety critical, traditional big manufacturers are taking AM seriously and successfully integrating it into their complex PLM and regulatory frameworks.

The final session of the day was probably my personal favourite. Louise Geekie from Croft AM gave a brilliant talk on metal AM, but what made it for me was her theme of understanding when you shouldn’t use AM and its limitations – basically, just because you can… should you? This covered long-term considerations on production volumes, compromises on material yield for surface quality, failure rates and the costs of post-production finishing. Just because a part has been designed by engineering optimisation doesn’t mean an end user finds it aesthetically appealing – the case of a motorcycle manufacturer that wants the front fork to “look” solid.

Overall my key takeaways were:

  • Just because you can doesn’t mean you should: choosing AM requires an understanding of the limitations and compromises, and an overall plan if volume manufacture is an issue.

  • The big CAD players are involved, but there’s still work to be done to harden the surrounding frameworks, in particular reliable simulation, search and fidelity testing.

  • How well the surrounding products and technologies handle the types of topologies and geometries GM throws out will be interesting to watch. In particular it’ll be interesting to see how Siemens Synchronous Technology and direct modellers cope, and how part search engines such as Siemens Geolus fare too.

  • Generative manufacture is computationally heavy, and the quality of your CPU and GPU is worth thinking about.

Hardware OEMs and CPU/GPU vendors taking CAD/PLM seriously

These new technologies are all hardware- and computationally-demanding compared to the modelling kernels of 20 years ago. AMD were showcasing and talking about all the pro-viz, rendering and cloud graphics technologies you’d expect, but it was pleasing to see their product and solution teams, and those from Dell, Intel, HP etc., talking about computationally intensive technologies that benefit from GPU and CPU horsepower, such as CAE/FEA and of course generative design. The increasing involvement and support from hardware OEMs and GPU vendors for end-user and ISV CAD/Design events and forums such as COFES, the Siemens PLM Community and Dassault’s Community of Experts has been noticeable in recent years, which should hopefully bode well for future platform developments in hardware for CAD/Design.

Afterthoughts

A few weeks ago Al Dean from Develop3D wrote an article (bordering on a rant) about how poorly positioned a lot of the information around generative design (topology optimisation) and its link to additive manufacture is. I think many reading it simply thought – yes!

After reading it, I came to the conclusion that many think generative design and additive manufacture are inextricably linked. Whilst they can be used in conjunction, there are vast numbers of use cases where the use of only one of the technologies is appropriate.

Generative design in my mind is computationally optimising a design to some physical constraints – it could be mass of material, or physical forces (stress/strain) – and could include additional constraints: must have a connector like this in this area, must be this long, or even must be tapered and constructed so it can be moulded (including appropriate tapers etc., so it falls out of the mould).

Additive manufacture is essentially 3-D printing, often metals. Adding material rather than the traditional machining mentality of CAD (Booleans often described as target and tool) – removing stuff from a block of metal by machining.

My feeling is generative design has far greater potential for reducing costs and optimising parts for traditional manufacturing techniques (e.g. 3/5-axis G-code-like considerations, machining, injection moulding) than has been highlighted. Meanwhile AM as a prototyping workflow for those techniques is less mature than it could be, as the focus has been on the weird and wonderful organic parts you couldn’t make before without AM/3-D printing.

AWS and NICE DCV – a happy marriage! … resulting in a free protocol on AWS

Rachel Berrys Virtually Visual blog - Thu, 05/03/2018 - 13:12

It’s now two years since Amazon bought NICE and their DCV and EnginFrame products. NICE were very good at what they did. For a long time they were one of the few vendors who could offer a decent VDI solution that supported Linux VMs; with a history in HPC and Linux, they truly understood virtualisation and compute as well as graphics. They’d also developed their own remoting protocol, akin to Citrix’s ICA/HDX, and it was one of the first to leverage GPUs for tasks like H.264 encode.

Because they did Linux VMs and neither Citrix nor VMware did, NICE were often a complementary partner rather than a competitor, although with both Citrix and VMware adding Linux support that has shifted a little. AWS promised to leave the NICE DCV products alone and have been true to that. However, the fact Amazon now owns one of the best and most experienced protocol teams around has always raised the possibility they could do something a bit more interesting than most other clouds.

Just before Xmas in December 2017, without much fuss or publicity, Amazon announced that they’d throw NICE DCV in for free on AWS instances.

NICE DCV is a well-proven product with standalone customers, and for many users it offers an alternative to the Citrix/VMware offerings; which raises the question: why run VMware/Citrix on AWS if NICE will do?

There are also an awful lot of ISVs looking to offer cloud-based services and products including many with high graphical demands. To run these applications well in the cloud you need a decent protocol, some have developed their own which tend to be fairly basic H.264, others have bought in technology from the likes of Colorado Code Craft or Teradici’s standalone Cloud Access Software based around the PCoIP protocol. Throwing in a free protocol removes the need to license a third-party such as Teradici, which means the overall solution cost is cut but with no impact on the price AWS get for an instance. This could be a significant driver for ISVs and end-users to choose AWS above competitors.

Owning and controlling a protocol was a smart move on Amazon’s part; as a key element of remoting and of the performance of a cloud solution, it makes perfect sense to own one. Microsoft, and hence Azure, already have RDS/RDP under their control. Will we see moves from Google or Huawei in this area?

One niggle is that many users need not just a protocol but a broker; at the moment Teradici and many others do not offer one themselves, and users need to go to another third-party such as Leostream to get the functionality to spin up and manage the VMs. Leostream have made a nice little niche supporting a wide range of protocols. It turns out that AWS are also offering a broker via the NICE EnginFrame technologies; this is however an additional paid-for component, but the single-vendor offering may well appeal. It was really hard to find out what was available from the AWS documentation and product overviews – in the end I had to contact the product management team for NICE directly to be certain.

Teradici do have a broker in-development, the details of which they discussed with Jack on brianmadden.com.

So, today there is the option of a free protocol and a paid-for broker (NICE + EnginFrame, albeit tied to AWS), and soon there will be a paid protocol from Teradici with a broker thrown in; the Teradici protocol is already available on the AWS marketplace.

This is just one example of many where cloud providers can take functionality in-house and boost their appeal by cutting out VDI, broker or protocol vendors. Those niche protocol and broker vendors will need to offer value through platform independence and any-ness (the ability to choose AWS, Azure, Google Cloud) against out-of-the-box, one-stop cloud giant offerings. Some will probably succeed, but a few may well be squeezed. It may indeed push some to widen their offerings, e.g. protocol vendors adding basic broker capabilities (as we are seeing with Teradici) or widening Linux support to match the strong NICE offering.

In particular broker vendor Leostream may be pushed, as other protocol vendors may well follow Teradici’s lead. However, analysts such as Gabe Knuth have reported for many years on Leostream’s ability to evolve and add value.

We’ve seen so many acquisitions in VDI/Cloud where a good small company gets consumed by a giant and eventually fails, the successful product dropped and the technologies never adopted by the mainstream business. AWS seem to have achieved the opposite with NICE, continuing to invest in a successful team and product whilst leveraging exactly what they do best. What a nice change! It’s also good to see a bit more innovation and competition in the protocol and broker space.

Open-sourced Virtualized GPU-sharing for KVM

Rachel Berrys Virtually Visual blog - Thu, 03/22/2018 - 12:05

About a month ago Jack Madden’s Friday EUC news-blast (worth signing up for) highlighted a recent announcement from AMD around open-sourcing their GPU drivers for hardware shared-GPU (MxGPU) on the open-source KVM hypervisor.

The actual announcement was made by Michael De Neffe on the AMD site, here.

KVM is an open source hypervisor, favoured by many in the Linux ecosystem and segments such as education. Some commercial hypervisors are built upon KVM adding certain features and commercial support such as Red Hat RHEL. Many large users including cloud giants such as Google, take the open source KVM and roll their own version.

There is a large open source KVM user base who are quite happy to self-support, including a large academic research community. Open-sourced drivers enable both vendors and others to innovate and develop specialist enhancements. KVM is also a very popular choice in the cloud OpenStack ecosystem.

As far as I know, this is the first open-sourced GPU-sharing technology available to the open source KVM base. AMD’s hardware sales model also suits this community well, with no software license or compulsory support; a model paralleling how CPUs/servers are purchased.

Shared GPU reduces the cost of providing graphics and suits the economies of scale and cost demanded in Cloud well. I imagine for the commercial and cloud based KVM hypervisors, ready access to drivers can only help accelerate and smooth their development on top of KVM.

The drivers are available to download here:

https://support.amd.com/en-us/download/workstation?os=KVM# . Currently there are only guest drivers for Windows OSs. However being open source, this opens up the possibility for a whole host of third-parties to develop variants for other platforms.

There is also an AMD community forum where you can ask more questions if this is a technology of interest to you and read the various stacks and applications other users are interested in.

Significant announcements for AR/VR for the CAD / AEC Industries

Rachel Berrys Virtually Visual blog - Fri, 03/09/2018 - 16:22
Why CAD should care about AR/VR?

VR (Virtual Reality) is all niche headsets and gaming? Or putting bunny ears on selfies… VR basically has a marketing problem. Looks cool but for many in enterprise it seems a niche technology to preview architectural buildings etc. In fact, the use cases are far wider if you get passed those big boxy headsets. AR (Augmented Reality) is essentially bits of VR on top of something see-through. There’s a nice overview video of the Microsoft Hololens from Leila Martine at Microsoft, including some good industrial case studies (towards the end of the video), here. Sublime have some really insightful examples too, such as a Crossrail project using AR for digital twin maintenance.

This week there have been some _really_ very significant announcements from two “gaming” engines, Unity and the Unreal Engine (UE) from Epic. The gaming engines themselves take data about models (which could be CAD/AEC models) together with lighting and material information and put it all together in a “game” which you can explore – or thinking of it another way they make a VR experience. Traditionally these technologies have been focused on gaming and film/media (VFX) industries. Whilst these games can be run with a VR headset, like true games they can be used on a big screen for collaborative views.

Getting CAD parts into gaming engines has been very fiddly:
  • The meshed formats in VFX industries are quite different from those generated in CAD.
  • Enterprise CAD/AEC users are also unfamiliar with the very complex VFX industry software used to generate lighting and materials.
  • CAD / AEC parts are frequently very large and with multiple design iterations so a large degree of automation is needed to fix them up repeatedly (or a lot of manual hard work)
  • Large engineering projects usually consist of thousands of CAD parts, in different formats from different suppliers

Many have focused on the Autodesk FBX ecosystem and 3DS Max, which, with tools like the Slate material editor, allow the materials/lighting information to be added to the CAD data. This week both Unreal and Unity announced what amount to end-to-end solutions for a CAD-to-VR pipeline.

Unreal Engine

At SIGGRAPH last year, in July 2017, Epic announced Datasmith for 3DS Max, with the implication of another 20 or so formats to follow (they were listed on the initial beta sign-up dropdown), including ESRI, Solidworks, Revit, Rhino, Catia, Autodesk, Siemens NX and Sketchup; the website today lists fewer, but more explicitly, here. This basically promises the technology to get CAD data from multiple formats/sources into a form suitable for VFX.

This week they followed it up with the launch of a beta of Unreal Studio. Develop3D have a good overview of the announcement, here. It is reminiscent of the Slate editor in 3DS Max, and it looks sleek enough that your average CAD/AEC user could probably use it without significant training (there are a lot of tutorial resources). With an advertised launch price of $49 per month it’s within the budget of your average small architectural firm, and the monthly billing makes it friendly to project-based work.

Epic are taking on a big task in delivering the end-to-end solution themselves, but they seem to know what they are doing. Watching their hiring website over the last six months, they seem to have been hiring a large number of staff both in development (often in Canada) and in sales/business for these projects (hint: the roles are often tagged with “enterprise”, so they’re easy to spot). Over the last couple of years they’ve also built up a leadership team for these projects, including Marc Petit, Simon Jones and Christopher Murray, and it’s worth reviewing the marketing material those folks are putting out.

Unity Announcement

On the same day as the UE announcement, Unity countered with a similar end-to-end solution via a partnership with PiXYZ, a small but specialist CAD toolkit provider.

Whilst the beta is not yet released, PiXYZ’s existing offerings look a very good and established technology match. Their website is remarkably high on detail about specific functionality, and it looks good. PiXYZ Studio, for example, has all the mesh fix-up tools you’d like for cleaning up CAD data for visualisation and VFX, and PiXYZ Pipeline seems to cover all your import needs. I’ve heard credible rumours that a lot of the CAD-focused functionality is built on top of some of the most robust industry-licensed toolkits, so the signs are positive that this will become a robust, mature solution rather fast. This partnership seems to place Unity in a position to match the Datasmith UE offering.

It’s less clear what Unity will provide on the materials / lighting front, but I imagine something like the Unreal Studio offering will be needed.

What did we learn from iRay and vRay in CAD?

Regarding static rendering in VFX land: vRay, Renderman, Arnold and iRay compete, with iRay taking a fairly small share. However, via strong GPU, hardware and software vendor partnerships, iRay has become the dominant choice in enterprise CAD (e.g. Solidworks Visualize). CAD loves to standardise, so it will be interesting to see whether a similar Unity vs Unreal battle unfolds, with an eventual dominant force.

Licensing and vendor lock-in

This has all been enabled by a shift in the licensing models of the gaming engines, demonstrating they are serious about the enterprise space. For gaming, a game maker would pay a percentage, such as 9%, to use a gaming engine to create their game. That makes no sense in the enterprise space, where the gaming engine is a tiny additional feature on the overall CAD/PLM deployment. So, while you will see lots of headlines about “royalty free” offerings, the revenues are in products such as Datasmith and Studio. The degree to which both vendors rely on third-party toolkits and libraries under the hood (e.g. CAD translators, the PiXYZ functionality) will also dictate profitability, via how much Unreal or Unity have to pay in licensing costs.

These single-vendor/ecosystem pipelines are attractive, but relying on the gaming engine provider for the CAD import and materials could potentially lead to lock-in, which always makes some customers nervous. Having done all the work of converting CAD data into something fit for rendering and VR, I could see the attraction of being able to output it to iRay, Unity or Unreal – which, of course, is the opposite of what these products offer.

Opportunities

There’s a large greenfield virgin market in CAD/AEC of customers who have very limited or no use of visualisation. Whilst the large AEC firms may have little pockets of specialist VFX, your average 10 man architecture firm doesn’t, like wise for the bulk of the Solidworks base. This technology looks simple enough for those users but I suspect uptake by SMBs may be slower than you might presume because for projects won on the lowest-bid why add a VR/AR/professional render component if Sketchup or similar is sufficient?

In enterprise CAD, AEC and GIS there are already VR users with bespoke solutions and strong specialist software offerings (often expensive) and it will be interesting to see the dynamics between these mass-market offerings and the established high-end vendors such as ESI.io or Optis.

These announcements are also setting Unity and Unreal up to start nibbling into the VFX, film and media ecosystems where specialist complex materials and lighting products are used. For many in AEC/CAD these products are a bit overkill. A lot of these users are likely to be less inclined to build their own materials and simply want libraries mapping the CAD materials (“this part is Steel”) to the VFX materials (“this is Steel and Steel should behave like this in response to light”). In the last month or so we’ve seen UE also move into traditional VFX territory with headlines such as “Visually Stunning Animated Feature ‘Allahyar and the Legend of Markhor’ is the First Produced Entirely in Unreal Engine” and Zafari – a new children’s cartoon TV series made using UE.


I haven’t seen any evidence of integrations with the CAD materials ecosystems to bridge that CAD-material (“this part is Steel”) to VFX-material (“this is Steel, and Steel should behave like this in response to light”) part of the solution. If this type of solution becomes mainstream, it would be nice to see the materials specialists (e.g. Granta Design) and CAD catalogues (e.g. Cadenas) carry information about how VFX-type visualisation should be done, based on the engineering material data. One to look out for.
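To make the idea concrete, here is a purely illustrative Python sketch of the kind of bridge table such an integration might expose: engineering material names on one side, physically based rendering (PBR) parameters on the other. Every name and value below is a hypothetical placeholder, not real Granta or Cadenas data.

    # Illustrative only: map an engineering material name (as it might appear in
    # CAD/PLM or a materials database) to the PBR parameters a gaming engine consumes.
    from dataclasses import dataclass

    @dataclass
    class PbrMaterial:
        base_colour: tuple   # linear RGB
        metallic: float      # 0.0 = dielectric, 1.0 = metal
        roughness: float     # 0.0 = mirror-like, 1.0 = fully diffuse

    # Hypothetical bridge table: engineering material -> visualisation material.
    CAD_TO_PBR = {
        "Stainless Steel 316": PbrMaterial((0.62, 0.62, 0.64), metallic=1.0, roughness=0.35),
        "Aluminium 6061":      PbrMaterial((0.91, 0.92, 0.92), metallic=1.0, roughness=0.45),
        "ABS Plastic":         PbrMaterial((0.20, 0.20, 0.22), metallic=0.0, roughness=0.60),
    }

    def pbr_for(cad_material: str) -> PbrMaterial:
        """Return PBR parameters for a CAD material, falling back to a neutral grey."""
        return CAD_TO_PBR.get(cad_material, PbrMaterial((0.5, 0.5, 0.5), 0.0, 0.5))

    print(pbr_for("Stainless Steel 316"))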


Overall, I’m very interested in these announcements – lots of sound technology and use cases – but whether the mass market is quite over the silly VR-headset focus just yet… we’ll soon find out. :-)


Looking at the Hyper-V Event Log (January 2018 edition)

Microsoft Virtualisation Blog - Tue, 01/23/2018 - 22:57

Hyper-V has changed over the last few years and so has our event log structure. With that in mind, here is an update of Ben’s original post in 2009 (“Looking at the Hyper-V Event Log”).

This post gives a short overview on the different Windows event log channels that Hyper-V uses. It can be used as a reference to better understand which event channels might be relevant for different purposes.

As general guidance, you should start with the Hyper-V-VMMS and Hyper-V-Worker event channels when analyzing a failure. For migration-related events it makes sense to look at the event logs on both the source and destination nodes.

Below are the current event log channels for Hyper-V. Using “Event Viewer” you can find them under “Applications and Services Logs”, “Microsoft”, “Windows”.
If you would like to collect events from these channels and consolidate them into a single file, we’ve published a HyperVLogs PowerShell module to help.

  • Hyper-V-Compute – Events from the Host Compute Service (HCS) are collected here. The HCS is a low-level management API.
  • Hyper-V-Config – This section is for anything that relates to virtual machine configuration files. If you have a missing or corrupt virtual machine configuration file, there will be entries here that tell you all about it.
  • Hyper-V-Guest-Drivers – Look at this section if you are experiencing issues with VM integration components.
  • Hyper-V-High-Availability – Hyper-V clustering-related events are collected in this section.
  • Hyper-V-Hypervisor – This section is used for hypervisor-specific events. You will usually only need to look here if the hypervisor fails to start – then you can get detailed information here.
  • Hyper-V-StorageVSP – Events from the Storage Virtualization Service Provider. Typically you would look at these when you want to debug low-level storage operations for a virtual machine.
  • Hyper-V-VID – These are events from the Virtualization Infrastructure Driver. Look here if you experience issues with memory assignment, e.g. dynamic memory, or changing static memory while the VM is running.
  • Hyper-V-VMMS – Events from the virtual machine management service can be found here. When VMs are not starting properly, or VM migrations fail, this is a good place to start investigating.
  • Hyper-V-VmSwitch – These channels contain events from the virtual network switches.
  • Hyper-V-Worker – This section contains events from the worker process that is used for the actual running of the virtual machine. You will see events related to startup and shutdown of the VM here.
  • Hyper-V-Shared-VHDX – Events specific to virtual hard disks that can be shared between several virtual machines. If you are using shared VHDs this event channel can provide more detail in case of a failure.
  • Hyper-V-VMSP – The VM security process (VMSP) is used to provide secured virtual devices, like the virtual TPM module, to the VM.
  • Hyper-V-VfpExt – Events from the Virtual Filtering Platform (VFP), which is part of the Software Defined Networking stack.
  • VHDMP – Events from operations on virtual hard disk files (e.g. creation, merging) go here.

Please note: some of these only contain analytic/debug logs that need to be enabled separately and not all channels exist on Windows client. To enable the analytic/debug logs, you can use the HyperVLogs PowerShell module.
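If you just want a quick look at recent events without installing anything, the built-in wevtutil tool can query these channels directly. The following is a rough Python sketch (not the HyperVLogs module itself) that consolidates the newest entries from a couple of admin channels into one text file; channel names can vary between Windows versions, so check "wevtutil el" on your host first.

    # Rough sketch: pull the most recent events from a few Hyper-V channels using
    # the built-in wevtutil tool and consolidate them into a single text file.
    import subprocess
    from pathlib import Path

    CHANNELS = [
        "Microsoft-Windows-Hyper-V-VMMS-Admin",
        "Microsoft-Windows-Hyper-V-Worker-Admin",
    ]

    out = Path("hyperv-events.txt")
    with out.open("w", encoding="utf-8") as f:
        for channel in CHANNELS:
            f.write(f"===== {channel} =====\n")
            # /c:50 = last 50 events, /rd:true = newest first, /f:text = readable output
            result = subprocess.run(
                ["wevtutil", "qe", channel, "/c:50", "/rd:true", "/f:text"],
                capture_output=True, text=True,
            )
            f.write(result.stdout or result.stderr)
            f.write("\n")

    print(f"Wrote consolidated events to {out.resolve()}")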

All the best,

Lars

Categories: Microsoft, Virtualisation

A smaller Windows Server Core Container with better Application Compatibility

Microsoft Virtualisation Blog - Mon, 01/22/2018 - 19:04

In Windows Server Insider Preview Build 17074 released on Tuesday Jan 16, 2018, there are some exciting improvements to Windows Server containers that we’d like to share with you.  We’d love for you to test out the build, especially the Windows Server Core container image, and give us feedback!

Windows Server Core Container Base Image Size Reduced to 1.58GB!

You told us that the size of the Server Core container image affects your deployment times, takes too long to pull down and takes up too much space on your laptops and servers alike.  In our first Semi-Annual Channel release, Windows Server, version 1709, we made some great progress reducing the size by 60% and your excitement was noted.  We’ve continued to actively look for additional space savings while balancing application compatibility. It’s not easy but we are committed.

There are two main directions we looked at:

1) Architecture optimization to reduce duplicate payloads

We are always looking for ways to optimize our architecture. In Windows Server, version 1709, along with the substantial reduction in the Server Core container image, we also made substantial reductions in the Nano Server container image (dropping it below 100MB). In doing that work we identified that some of the same architecture could be leveraged for the Server Core container. In partnership with other teams in Windows, we were able to implement changes in our build process to take advantage of those improvements. The great part about this work is that you should not notice any differences in application compatibility or experience, other than a nice reduction in size and some performance improvements.

2) Removing unused optional components

We looked at all the various roles, features and optional components available in Server Core and broke them down into a few buckets in terms of usage: frequently used in containers, rarely used in containers, those we don’t believe are being used, and those that are not supported in containers. We leveraged several data sources to help categorize this list. First, those of you that have telemetry enabled – thank you! That anonymized data is invaluable to these exercises. Second was publicly available dockerfiles/images, and of course feedback from GitHub issues and forums. Third, roles and features that are not even supported in containers were an easy call to remove. Lastly, we also removed roles and features we see no evidence of customers using. We could do more in this space in the future, but we really need your feedback (telemetry is also very much appreciated) to help guide what can be removed or separated.

So, here are the numbers on Windows Server Core container size if you are curious:

  • 1.58GB, download size, 30% reduction from Windows Server, version 1709
  • 3.61GB, on disk size, 20% reduction from Windows Server, version 1709
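For the curious, those percentages imply the following approximate sizes for the version 1709 image (derived purely from the figures above, not separately measured):

    # Back-of-the-envelope check: what the stated reductions imply about the
    # Windows Server, version 1709 image sizes.
    figures = {
        "download": (1.58, 0.30),  # (build 17074 size in GB, stated reduction)
        "on disk":  (3.61, 0.20),
    }

    for label, (new_size, reduction) in figures.items():
        implied_1709 = new_size / (1 - reduction)
        print(f"{label}: {new_size} GB now -> roughly {implied_1709:.2f} GB in version 1709")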

MSMQ now installs in a Windows Server Core container

MSMQ has been one of the top asks we heard from you, and it ranks very high on Windows Server User Voice here. In this release, we were able to partner with our kernel team and make the change, which was not trivial. We are happy to announce that it now installs and passes our in-house application compatibility tests. Woohoo!

However, there are many different use cases and ways customers use MSMQ, so please do try it out and let us know whether it works for you.
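A quick way to try it yourself is to install the feature inside a throwaway container and check the resulting install state. The sketch below assumes Docker is switched to Windows containers and that the Insider image is available as microsoft/windowsservercore-insider (adjust the image reference for your environment); it simply shells out to docker and PowerShell's Install-WindowsFeature/Get-WindowsFeature cmdlets.

    # Smoke test (sketch): install MSMQ inside a disposable Server Core Insider
    # container and report the feature's install state.
    import subprocess

    IMAGE = "microsoft/windowsservercore-insider"  # assumed Insider image reference

    ps_command = (
        "Install-WindowsFeature MSMQ-Server; "
        "Get-WindowsFeature MSMQ* | Format-Table Name, InstallState"
    )

    result = subprocess.run(
        ["docker", "run", "--rm", IMAGE, "powershell", "-Command", ps_command],
        capture_output=True, text=True,
    )

    print(result.stdout)
    if result.returncode != 0:
        print("Container run failed:", result.stderr)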

A Few Other Key App Compatibility Bug Fixes:

  • We fixed the issue reported on GitHub that services running in containers do not receive shutdown notifications.

https://github.com/moby/moby/issues/25982

  • We fixed an issue reported on GitHub and User Voice related to BitLocker and the FDVDenyWriteAccess policy: users were not able to run basic Docker commands like docker pull.

https://github.com/Microsoft/Virtualization-Documentation/issues/530

https://github.com/Microsoft/Virtualization-Documentation/issues/355

https://windowsserver.uservoice.com/forums/304624-containers/suggestions/18544312-fix-docker-load-pull-build-issue-when-bitlocker-is

  • We fixed a few issues reported on GitHub related to mounting directories between hosts and containers.

https://github.com/moby/moby/issues/30556

https://github.com/git-for-windows/git/issues/1007

We are excited and proud of what we have done so far to listen to your feedback, continuously optimize Server Core container size and performance, and fix top application compatibility issues to make your Windows container experience better and meet your business needs. We love hearing how you are using Windows containers, and we know there are still plenty of opportunities ahead of us to make them even faster and better. Fun journey ahead of us!

Thank you.

Weijuan

Categories: Microsoft, Virtualisation
