Amazon Redshift now supports Elastic resize

One of the major pain points for me with Amazon Redshift has always been the coupling between storage and compute.  Competitors like Snowflake and Google’s BigQuery offer independent compute and storage, which means easier (and quicker) scaling in times of increased load.  Redshift’s main drawback in the scalability sense has been that it can take up to 24 hours to resize your cluster (during which it’s in read-only mode), meaning there’s a lot of pressure to get your cluster configuration spot on before you go into production.  Redshift’s provision of elasticity is just not up to par with most of Amazon’s other services.  While Redshift Spectrum helps with this, it’s not a solution to the issue of scalability for an existing cluster.

In the lead-up to re:Invent, Amazon last night dropped a load of really neat announcements (server-side encryption for DynamoDB as standard, SSE support for SNS), among which was the reveal of Elastic resize for Redshift.  As an aside, if this is the stuff they’re announcing in advance, there should be some really big reveals at re:Invent itself.

How does resizing work?

Traditionally, when you identified a need to resize your Redshift Data Warehouse, you’d have to plan in some maintenance time to carry out the resize operation.  This can typically take anything from 1 to 24 hours, depending on your node type, volume of data, and other factors.

Under the “classic” model, Redshift switches your cluster into read-only mode and takes a snapshot of your data.  It’ll then go away and provision an entirely new cluster that meets your new spec, and start loading all your data in from the snapshot.  Only once this load operation is complete does Redshift point your cluster endpoints over to the new cluster and release its read-only hold.  The old cluster then gets destroyed.
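
For reference, a classic resize can be kicked off through the API as well as the console.  Here’s a minimal boto3 sketch, assuming a hypothetical cluster name and node count:

```python
import boto3

redshift = boto3.client("redshift")

# Changing the node count via ModifyCluster triggers a classic resize:
# the cluster goes read-only while your data is reloaded onto a freshly
# provisioned cluster behind the scenes.
redshift.modify_cluster(
    ClusterIdentifier="my-cluster",  # hypothetical cluster name
    NumberOfNodes=8,
)

# DescribeResize reports progress while the operation runs.
status = redshift.describe_resize(ClusterIdentifier="my-cluster")
print(status["Status"], status.get("ProgressInMegaBytes"))
```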

As you can imagine, this is a time-consuming and disruptive process.  Do you really want your Enterprise Data Warehouse to be unavailable for writes for up to a day?  While there are workarounds, such as provisioning a new cluster yourself and creating a pseudo-replication process, these are typically heavy on effort and cost.

Elastic resizing

As they often do, Amazon have recognised the pain point and worked to remedy it.  Elastic resizing (read more here: https://aws.amazon.com/about-aws/whats-new/2018/11/amazon-redshift-elastic-resize/) massively improves the process by making it mostly online, reducing the period of disruption from up to 24 hours to only a few minutes.

If you want to understand how this works under the hood, I thoroughly recommend you watch Amazon’s online tech talk on the subject, which details how they’ve achieved Elastic resizing: https://pages.awscloud.com/Best-Practices-for-Scaling-Amazon-Redshift_1111-ABD_OD.html

Redshift now provides the option to choose between Elastic and Classic resize operations.

At a high level, they’ve developed a way whereby some of the slices of your cluster can be transferred to new nodes in a transparent manner, minimising disruption and allowing the cluster to remain read/write capable throughout the majority of the process.  There may be some minor disruption, including query cancellations, etc., but I’ll take a few minutes over several hours any day.
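
If you want to try it, the new ResizeCluster API exposes the elastic path directly.  A minimal boto3 sketch, again with a hypothetical cluster name:

```python
import boto3

redshift = boto3.client("redshift")

# ResizeCluster with Classic=False requests an elastic resize, which
# redistributes slices to the new node set rather than rebuilding the
# whole cluster from a snapshot.
redshift.resize_cluster(
    ClusterIdentifier="my-cluster",  # hypothetical cluster name
    NumberOfNodes=8,                 # e.g. doubling a 4-node cluster
    Classic=False,
)
```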

There are some limitations of course:

  • You can only use Elastic resize to add/remove nodes, not change node type.
  • It only supports dc2 and ds2 node types.  Anyone still running a dc1 cluster will have to upgrade.  It’s worth doing this anyway for the free performance boost.
  • Single-node clusters aren’t supported (not that you’d be using single-node in production anyway).
  • It appears that you can only double or halve your cluster nodes.  I suspect this is related to the way slices are allocated on disk.  For example, if you’re running a 4-node ds2.xlarge cluster, you can Elastic resize to a 2-node or 8-node cluster.
  • There’s no sorting involved with an Elastic resize, so it can’t substitute for a vacuum operation, whereas a Classic resize can.

Summary

All in all, the introduction of the Elastic resize capability is a major plus for Redshift.  While it doesn’t remove the coupled storage/compute setup, it does remove a major barrier to cluster scaling, and even opens the door to scaling up/down according to demand – a use case that has never really been practical on Redshift until now.

Has anyone tried out Elastic resize so far?  If so, let me know what you think of the capability and how this has impacted your business.

Redshift Spectrum finally supports Enhanced VPC routing

What seems like an age ago, I spotted a setting on one of our Redshift clusters that suggested Enhanced VPC routing support for Redshift Spectrum might be on the way.  After waiting a while, and waiting some more, and then waiting some more, it seems that Amazon have finally released this into the wild, and Redshift Spectrum now works with clusters that have Enhanced VPC routing enabled!

As of Build 1.0.4349 or Build 1.0.4515, this functionality will be available in Redshift.  It hasn’t made it into the official announcements yet, but it has popped up on the Redshift forums here: https://forums.aws.amazon.com/ann.jspa?annID=6197
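
If you want to check whether your cluster has picked up the new build, DescribeClusters exposes both the cluster’s revision number and the Enhanced VPC routing flag.  A quick boto3 sketch, with a hypothetical cluster name (and assuming the revision number lines up with the 1.0.xxxx build numbering):

```python
import boto3

redshift = boto3.client("redshift")

cluster = redshift.describe_clusters(
    ClusterIdentifier="my-cluster"  # hypothetical cluster name
)["Clusters"][0]

# Assumption: the revision number corresponds to the 1.0.xxxx build.
print(cluster["ClusterRevisionNumber"])
print(cluster["EnhancedVpcRouting"])  # True if the cluster uses it
```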

My previous post covers what Redshift Spectrum and Enhanced VPC Routing are, so check that out if you’re unsure why this is big news: https://picnicerror.net/development/aws/is-aws-about-to-enable-redshift-spectrum-with-enhanced-vpc-routing-2018-01-18/

Has anyone out there tried this yet?  If so, let me know in the comments below how you’ve been getting on.

AWS releasing in-browser Query Editor for Redshift

One of the things that I really like about Google BigQuery is the ability to write queries right there in the web browser without having to install a hefty IDE.  Sure, there are times when having the full power of something like JetBrains DataGrip comes in handy (source control integration, customisation, formatting), but sometimes you just want to dive in and write a quick query without any messing around.  Amazon did this for Athena, which was really handy, but strangely never did so for Redshift…until now!

There doesn’t seem to be any PR on this yet, so I’m assuming it’s a brand new feature, but log in to your Redshift console, and if you’re in a supported region (and/or account, perhaps), you might see a couple of additions to the left-hand menu.

Amazon appear to be adding an in-browser Redshift Query Editor.

Interestingly, I can’t currently get past this modal dialog, which states:

You can only query Amazon Redshift clusters that are dc1.8xlarge, dc2.large, dc2.8xlarge, or ds2.8xlarge node types. If you don’t have an available cluster, you can launch a new cluster from the Redshift Dashboard.

A couple of observations about this:

  1. I’m running some dc2.large clusters, so it’s a little strange that I’m getting this message.  I guess it does say that it’s a beta feature, so some weirdness is to be expected.
  2. No ds2.xlarge mentioned.  Why?  Could it be (and I’m really stretching here) that Amazon have plans to deprecate the ds2.xlarge node type?  I can’t really think of any valid reason why they would support all the other available node types (including the superseded dc1.8xlarge) but not this one, unless they’re planning on removing/replacing it in the near future.

It also looks like there’ll be a way to save your previous queries (just like BigQuery, Athena, Hive etc.) so that you can re-run them at any time.

I’ll add more detail as things start to work and/or Amazon release more information, but for the meantime, has anyone else seen this pop up in their Redshift console?  Are you able to use the Query Editor?  If so, I’d love to hear your first impressions in the comments below.

AWS Lambda can now be invoked directly from SQS

While quietly perusing Twitter this evening, I happened to notice a tweet from the official AWS account with a link to a blog post from Amazon tech hero Randall Hunt describing the newly available capability for AWS Lambda: SQS as an event source!

This is functionality that I, personally, have been wanting for a while now.  While Simple Notification Service (SNS) is absolutely brilliant for a fan-out architecture, and provides immense flexibility with a wide range of supported subscriber types, controlled, serverless polling of SQS wasn’t really a viable option.  While you *could* run a Lambda for a few minutes doing long-polling on SQS, and then terminate before exhausting the 5-minute execution duration cap, it really felt a bit dirty.  To properly implement a queue-polling architecture, you really had to deploy an application on EC2, which meant managing servers etc.  Not that there’s anything wrong with that, of course; it just seemed like there was a big glaring hole in the Serverless model.

Native SQS to Lambda event integration though really patches this omission and then some.  Randall’s blog post explains it in full, but it seems like Amazon have implemented some really nice intelligent scaling mechanisms to adjust Lambda concurrency (up to a defined limit) in response to queue depth.  This should really help constrain costs and ensure consistent throughput regardless of spiky traffic.
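
To give a flavour of what the integration looks like from the function’s side, here’s a minimal Python handler sketch (the processing logic is hypothetical); Lambda does the long-polling for you and hands the function a batch of records:

```python
import json

def process(payload):
    # Placeholder for your real business logic.
    print(payload)

def handler(event, context):
    # With SQS as an event source, each invocation receives a batch of
    # records; the message payload sits in the "body" field.
    for record in event["Records"]:
        process(json.loads(record["body"]))
    # Returning normally lets Lambda delete the batch from the queue;
    # raising an exception makes the messages visible again for retry.
```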

I’ve not yet had a chance to observe this in the wild though, so it’s best to test it against your workload before betting the farm on it, but this looks like yet another long-awaited piece of functionality that Amazon have knocked out of the park.  Have you explored SQS as an event source for Lambda yet?  Any observations or gotchas so far?  Let me know in the comments below!

SQS vs SNS for Lambda Dead Letter Queues

Serverless computing and event-driven functions are what it’s all about at the moment.  But what happens when the event trigger fires, and your process then encounters an error?  How do you recover from this given the event has since passed and may never happen again?  This is a common question in AWS when working with their serverless, event-driven Lambda Functions.

Fortunately, AWS lets you define Dead Letter Queues for this very scenario.  This option allows you to designate either an SQS queue or SNS topic as a DLQ, meaning that when your Lambda function fails it will push the incoming event message (and some additional context) onto the specified resource.  If it’s SNS you can send out alerts or trigger other services (maybe even a retry of the same function – although watch out for infinite loops), or any combination of the above, given its fanout nature.  If it’s SQS you can persist the message and process it with another service.
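
Wiring this up is a one-liner against the Lambda API.  A minimal boto3 sketch, with a hypothetical function name and ARN; the same TargetArn field accepts either an SQS queue or an SNS topic:

```python
import boto3

lambda_client = boto3.client("lambda")

# Designate a DLQ for asynchronous invocation failures. The TargetArn
# can point at an SQS queue (as here) or an SNS topic.
lambda_client.update_function_configuration(
    FunctionName="my-function",  # hypothetical function name
    DeadLetterConfig={
        "TargetArn": "arn:aws:sqs:eu-west-1:123456789012:my-dlq"  # hypothetical ARN
    },
)
```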

So let’s look at both options in a little more detail.

SQS Dead Letter Queue

Using SQS as a Dead Letter Queue (DLQ) ensures that you have a durable store for failed events that can be monitored (allowing the necessary services/individuals to be alerted) and picked up for resolution at your convenience.  This allows you to process failures in bulk, have a defined wait period before re-triggering the original event, or take some other steps towards resolution.

SQS gives you a durable dead letter queue that can be monitored and polled to collect failed events for re-processing or special attention.

The fact that you don’t reprocess the event straight away gives you a little more flexibility around when and how you deal with lambda failures.
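
As a sketch of what that looks like in practice (the queue URL and re-processing step are hypothetical), a consumer can long-poll the DLQ on whatever schedule suits you and clear messages once they’ve been dealt with:

```python
import boto3

sqs = boto3.client("sqs")
queue_url = "https://sqs.eu-west-1.amazonaws.com/123456789012/my-dlq"  # hypothetical

def handle_failure(body):
    # Placeholder: re-trigger the original event, alert someone, etc.
    print(body)

# Long-poll the DLQ and process failed events in bulk.
resp = sqs.receive_message(
    QueueUrl=queue_url,
    MaxNumberOfMessages=10,
    WaitTimeSeconds=20,
)
for msg in resp.get("Messages", []):
    handle_failure(msg["Body"])
    sqs.delete_message(QueueUrl=queue_url, ReceiptHandle=msg["ReceiptHandle"])
```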

Pros

  • Durability: process when you’re ready to deal with the issue, maybe in bulk.
  • Can keep messages for up to 14 days
  • Near-guaranteed delivery

Cons

  • Latency: not event-driven so must be polled.
  • Single-subscriber: Messages will be deleted after being consumed by a subscriber, so it assumes a single process will be taking action on failed messages.

SNS Dead Letter Queue

SNS, or Simple Notification Service, is a key part of AWS’s event-driven offering, letting you process events almost instantaneously and fan out to multiple subscribers.  It’s a great way to integrate applications in a microservices architecture.  You can also use an SNS Topic as a Dead Letter Queue (DLQ).  This has the benefit of allowing you to instantly take action on failure, whether that be attempting to re-process the message, alerting an individual/process, storing the event message somewhere for follow-up, or any combination/all of the above.

SNS provides an event-driven Dead Letter Queue, enabling you to take immediate action to retry, alert, and/or store the incoming event-message.

The key to the SNS approach is its flexibility in sending messages to multiple subscribers.  It allows you to take some action immediately, while also passing the message to other, more suitable systems where it can be picked up and processed.

Pros

  • Event-driven: An SNS DLQ will trigger actions instantly upon receiving a message.
  • Fan-out: Configuring multiple subscribers allows multiple actions to be taken by different subscribers at the same time.

Cons

  • Non-Durable: SNS doesn’t keep messages for more than an hour.

Best of Both Worlds

A pattern that works rather well, and offers the best of both worlds, is to combine both SNS and SQS as in the diagram above.  By defining an SNS Topic as the DLQ, and having an SQS subscriber attached to the SNS Topic, you can have your durable store in the SQS queue, while also taking instant action.  The only caveat is that if you are re-attempting to process the message and this time it succeeds, you need some way to tell SQS so that you can remove the message from the queue.
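
The plumbing for that pattern is straightforward: point the Lambda’s DeadLetterConfig at the SNS topic, then subscribe the queue to it.  A minimal boto3 sketch with hypothetical ARNs (note the queue also needs an access policy allowing the topic to send to it):

```python
import boto3

sns = boto3.client("sns")

topic_arn = "arn:aws:sns:eu-west-1:123456789012:lambda-dlq-topic"  # hypothetical
queue_arn = "arn:aws:sqs:eu-west-1:123456789012:lambda-dlq-store"  # hypothetical

# Attach the durable SQS store to the event-driven SNS topic. Other
# subscribers (e.g. a retry Lambda or an alerting endpoint) can be
# added to the same topic for the instant-action side.
sns.subscribe(TopicArn=topic_arn, Protocol="sqs", Endpoint=queue_arn)
```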

Not perfect by any stretch, but it gives a little of the benefit of both.

Summary

There are a huge number of different patterns (and anti-patterns) out there for implementing SQS and SNS, as well as Lambda and event-driven patterns in general.  The two above are just basic representations that work well in certain scenarios.  I’d be really interested to hear from other people who have worked with serverless/event-driven architectures on AWS and what your opinions are, as well as any patterns you’ve found to be a good way of managing DLQs.

Please leave your comments and thoughts below!

Is AWS about to enable Redshift Spectrum with Enhanced VPC Routing?

AWS is knocking it out of the park at the moment with loads of new services and features coming out every week.  Indeed, it can be hard to keep up with the degree of change.  But, while working on one of our Redshift clusters today we spotted a potential scoop that would remove a key blocker for one extremely useful service, Redshift Spectrum.

Up until now it’s only been possible to use Spectrum if you don’t have Enhanced VPC Routing enabled on your Redshift cluster.  There are so many benefits to using Enhanced VPC Routing (reduced data transfer cost, control, security) that it’s hard to see why anyone wouldn’t be using it, especially if you move data between Redshift and S3 a lot.

But we spotted a new parameter being applied to one of our clusters when we made some maintenance changes to a parameter group.  There’s now a parameter named spectrum_enable_enhanced_vpc_routing showing, which hints that Amazon may be about to remove this crucial limitation.

What is Redshift Spectrum?

Redshift Spectrum is a seriously cool name for what is essentially fluid extra horsepower for your Redshift cluster.  One of the things commonly cited as a drawback for Redshift is the fact that storage is coupled with compute: there’s no way to scale up to more computing power without also scaling storage (and paying for it).  Enter Spectrum.

Redshift Spectrum is an extension to Redshift that allows AWS users to use on-demand Redshift capability to instantly scale compute power in order to query data that is held in S3.  This works by defining external tables in Redshift.  These external tables are essentially metadata telling Redshift that the files in a specific S3 location are structured in a particular way, so that when a user issues a query against the external table, the Redshift query optimiser knows what the data is, and what it looks like.

When you query this external table, Redshift calculates the estimated data volumes, and computing power needed, and allocates some compute resources from a central pool in order to service your query.  This all happens transparently, and ensures that you are temporarily allocated the necessary compute power to process your query in a reasonable timeframe.
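
To make that concrete, here’s a hedged sketch of defining an external schema and table from Python (the connection details, IAM role, bucket, and columns are all hypothetical); note that Redshift’s external DDL can’t run inside a transaction, hence the autocommit:

```python
import psycopg2

# Hypothetical connection details.
conn = psycopg2.connect(
    "host=my-cluster.example.eu-west-1.redshift.amazonaws.com "
    "port=5439 dbname=analytics user=admin password=secret"
)
conn.autocommit = True  # external DDL must run outside a transaction
cur = conn.cursor()

# Register an external schema backed by the data catalog...
cur.execute("""
    CREATE EXTERNAL SCHEMA spectrum
    FROM DATA CATALOG DATABASE 'spectrumdb'
    IAM_ROLE 'arn:aws:iam::123456789012:role/my-spectrum-role'
    CREATE EXTERNAL DATABASE IF NOT EXISTS;
""")

# ...then describe the layout of the files sitting in S3.
cur.execute("""
    CREATE EXTERNAL TABLE spectrum.sales (
        sale_id   BIGINT,
        sale_date DATE,
        amount    DECIMAL(10,2)
    )
    STORED AS PARQUET
    LOCATION 's3://my-bucket/sales/';
""")
```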

Crucially, this answers the compute vs storage complaint and gives Redshift a similar capability to Google’s BigQuery, which had previously been missing.

I’ll delve into Spectrum in more detail in another post, but for now let’s get back to the matter at hand.  In the meantime, why not check out Amazon’s docs on Redshift Spectrum?

What is Enhanced VPC Routing?

In AWS you can configure VPCs (Virtual Private Clouds) which allow you to segregate and group resources and control security, data transfer, and all sorts of other things for all manner of reasons.  Crucially though, some centralised AWS services, most importantly S3 (Simple Storage Service) which is the backbone of AWS, live outside your VPCs.  Amazon don’t charge you to put data into AWS (why would they?) but they do charge you to take data out, or to move it around between regions and VPCs.  It also means that traffic between your VPC and S3 has to go over the big bad Internet.

So this becomes important when you have data moving from “VPC-less” (at least in basic terms) services such as S3, and your resources that you’ve configured within a VPC, for example Redshift.  Fortunately, AWS offers Enhanced VPC Routing, which allows you to route traffic between S3 and Redshift through your VPC, meaning you can control all kinds of aspects of this data movement such as DNS, security groups, ACLs, traffic monitoring and loads more.  The advantages are obvious.
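
For completeness, the flag itself is just a per-cluster setting.  A boto3 sketch, assuming a hypothetical cluster name:

```python
import boto3

redshift = boto3.client("redshift")

# Turn on Enhanced VPC Routing so COPY/UNLOAD traffic between the
# cluster and S3 flows through your VPC rather than the public route.
redshift.modify_cluster(
    ClusterIdentifier="my-cluster",  # hypothetical cluster name
    EnhancedVpcRouting=True,
)
```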

Again, I may touch on this in another post, so I’ll leave it here for now.  See Amazon’s docs on Enhanced VPC Routing and Redshift.

Redshift Spectrum and Enhanced VPC Routing

Tucked away in the Spectrum small print, is a line that states “Your cluster can’t have Enhanced VPC Routing enabled.”  This is a major blocker for anyone wanting to use Spectrum with an in-VPC Redshift cluster as it would mean either a new cluster would be required, or turning off Enhanced VPC Routing.

Fortunately, the newly appeared spectrum_enable_enhanced_vpc_routing parameter suggests that this may be about to change.  I’ve not seen anything from Amazon yet to confirm this, but watch this space!

The parameter “spectrum_enable_enhanced_vpc_routing” has suddenly appeared on the Redshift console, hinting that Spectrum may be about to remove a major restriction.

Let me know in the comments below if you’ve seen any more on the topic, or any official comms from AWS.

Redshift connectivity officially announced for Power BI Service

Last year, Microsoft added a preview connector enabling Power BI to query Amazon Redshift.  This wasn’t publicised as an “official” data source, and it took some steps to even see the connector in Power BI Desktop.  Crucially, you could only use this connector in Power BI Desktop, not when workbooks are deployed to the cloud.  Yesterday, Microsoft announced the connector is now available within the Power BI Service, which means that workbooks containing Redshift data connections can now be deployed to the cloud.  I’ve been working a lot with Redshift over the past year or so, and Power BI’s still my go-to data-viz solution, so I’m delighted to see this announcement, as it means that Redshift-based workbooks can now be shared with others via powerbi.com.

You can read details of the announcement of Redshift for the Power BI Service over on the Power BI blog; I’m not going to replicate it here.

Working with Redshift data in Power BI

As with most database-type data sources, Power BI offers two query modes: Import and DirectQuery.  Import mode allows you to select a number of tables and views from the data source, and then loads all the data from these into Power BI.  That’s fine for a lot of data sources, but when you’re dealing with potentially billions of rows of data like you normally would be in Redshift (or other big data solutions like Google BigQuery, Spark, Snowflake, etc.), it isn’t really an option.

You’re paying for the processing power these solutions offer, so use it.  DirectQuery mode pushes query execution down onto the database, allowing something like Redshift to use the power of the cluster to execute the query and return the results to the client, in this case Power BI.  This is a very common model among client tools that support big data repositories: Tableau, Qlik, Alteryx, etc. all support a similar practice under various names.  These queries are issued in real time, as the user filters and interacts with the visualisation.  There are some limitations to this approach, as outlined here.

The configuration on powerbi.com is still a little involved, and there aren’t direct connectors set up as of yet, but it’s great to see Microsoft weaving Redshift support deeper into Power BI.  Watch this space!

If you’ve used the Redshift connector for Power BI (or any of the other experimental connectors like Impala or Snowflake), let me know in the comments below how your experience has been and what your thoughts are.

Amazon Quicksight now in General Availability!

Late last night, Amazon announced that their proprietary AWS data visualisation tool, Quicksight, is now generally available in the US and Ireland.  Quicksight aims to be a Power BI-esque drag-and-drop visualisation tool that allows you to access your data from AWS (and other sources) in seconds, regardless of scale.  I’ve had a very quick go this morning, and visualised some data from a modest 1TB Redshift cluster after just a few minutes.  The biggest challenge was finding out the correct IP range for Quicksight to enable access to my VPC (thank you, Server Fault).

More to follow… In the meantime, try Quicksight for yourself here: https://quicksight.aws.amazon.com

5 Observations from Microsoft Build 2016

Microsoft’s Build conference for 2016 took place a couple of weeks ago, and true to form, there were a number of killer announcements and reveals across services, tools, and frameworks, many of which are available today.  Not one to ever really post something when it’s actually relevant, here are a few of the things that jumped out at me from the event.

Natural Language Processing is at the core of Microsoft’s future

Microsoft hasn’t made any secret of their work in the field of Natural Language Processing.  They released Q&A as one of the key features of Power BI, enabling users to query their data and generate visualisations using near-natural language.  Then Cortana came along, using the same NLP algorithms and knowledge base to enable Windows Phone users to command their mobile with their voice.  Then, as the capabilities advanced, Cortana was introduced to Windows 10 and became a key part of the latest MS operating system, all built on the same platform and all the experience garnered over the last few years.

“Human language is the new UI” – Microsoft CEO Satya Nadella

During his keynote address, MS CEO Satya Nadella gave a key insight into the company’s direction: “Human language is the new UI,” he said, and voice-controlled “bots are the new apps.”  Microsoft’s vision is for users to interact with bots via natural language, who will then interpret the user commands and relay these to the computer.  “Clunky” web forms and cluttered interfaces will be replaced by a new, simpler way to interact with computers.

It’s a grand vision, and while we’re definitely a long way away from the talking computers of Star Trek, Microsoft has made some excellent strides in this direction, not least with the underlying platform behind Cortana, which has now been released to developers as the Language Understanding Intelligent Service (LUIS).  This is currently in beta (and free to use), and is definitely worth checking out if you haven’t already: https://www.luis.ai/.

Microsoft is responsible for the apocalypse

Speaking of bots, I’m sure everyone has already read about the exploits of Tay.  Microsoft’s Twitter-bot, built as a demo and test for their new Bot Framework, quickly became famous as “she” learned from other Twitter users, and rapidly degenerated from an innocent, naive teenage girl into a racist, abusive and downright maniacal representation of all that’s wrong with the human race.  Thank God they didn’t give her access to nuclear launch codes or it would be Skynet all over again!

Microsoft’s Twitter-bot Tay quickly lowered herself to the level of the worst Twitter trolls out there. Not the best advert for the company’s AI capabilities.

In saying that, Microsoft’s new Bot Framework is a very impressive product, and provides the brain to enable developers to create their own bots to easily integrate with lots of different services, such as Skype, email, Slack, and many more.  Taking the chilling vision of Tay’s future out of the equation, the availability of such a framework is an excellent idea, given the number of extremely poor “chat” programs out there.  And integration with other channels means developers can easily add smart chat functionality to anything they build, furthering the case for natural-language human-computer interaction.

Bash on Windows

Okay, so this one’s been making a lot of headlines.  There’s a fair bit of skepticism about how good it will be, but having Bash running on Windows 10 brings some really excellent developer utilities to Windows, which had previously been relying on .NET-based copies of core Linux utilities.  Couple this with the recent announcement about SQL Server being launched for Linux, and it’s clear that Microsoft these days is about getting people access to the best tools, regardless of their platform of choice.

Power BI launched natively on iOS, SQL Server is coming to Linux, and now Bash on Windows.  Modern-day Microsoft is a fair way away from the MS of old.

Cross-platform is really here.

The Universal Windows Platform (UWP) has had a lot of coverage lately, some good, some bad.  However, Microsoft is taking a really admirable approach in trying to save developers time (who wants to develop an app separately for every different form factor?) and improve continuity of user experience across devices.  The UWP is a great idea, but the key issue has yet again been the availability of apps on the platform.

So, it’s welcome news that at Build this year, MS announced that they’re providing a mechanism for developers to port their Win32 and .NET-based apps to the UWP.  Many intrepid souls have already shown this working with old-school PC games and other custom apps, but it’s great news and opens up the world’s largest collection of apps (or applications, if you were around before the millennium) to the new portable Windows platform.

And in other great cross-platform news, Continuum, Microsoft’s “convert your Windows Phone to a PC-lite” feature, is adding support for the Xbox One controller, meaning that when you’re away from home you just need to pack your controller and your display adaptor to turn your phone into a portable console that can hook up to a hotel TV.  Sure, you’ll be limited to phone-based games, but stuff like Halo: Spartan Assault should work really well, assuming they add controller support.

Project Oxford goes live

And finally, one of (in my opinion) the most impressive projects out of Microsoft Research in recent years, Project Oxford, has graduated to a full release.  The collection of machine learning services offers easy-to-use APIs that allow you to provide image recognition, image-based emotion detection, and even identify a breed of dog via their web-servicified (not a word) machine learning models.  There are a huge number of amazing possibilities with these services; just check out the video below to see what one MS staffer created for his smart glasses.

Project Oxford has been released commercially under the name Cognitive Services, and is available here: https://www.microsoft.com/cognitive-services/en-us/apis.

Conclusion

So, in summary, a really exciting and interesting Build conference this year, with a strong focus on creating intelligent software that can really learn from user behaviour, and provide a better, and easier, user experience across all devices.  There are some really grand ideas being thrown about, and I’ll be keeping a close eye on things to see how they progress over the next year.  I really like where Microsoft’s vision is heading, the big question is whether they can deliver on all of the promise, or if they’ll fall short.

Possible downtime – Hosting Migration

Yet again I’ve let things slide and haven’t posted in a while.  This one’s nothing exciting.  I’m currently migrating to a new hosting provider, so any weirdness can be attributed to this.

Server migration in progress

I’m hoping to be back soon with some more up to date posts!
