OU blog

Christopher Douce

Using the cloud to understand the user experience


In mid-July I went to an event at University College London that was all about interaction design and the user experience. This blog post is a quick summary of some of the key points that I took away from the event.

The theme for the evening was all about how to do remote user testing. User testing is a subject that is covered in the Open University module M364 Fundamentals of Interaction Design. Interestingly, the evening also had a connection with another module that I'm involved with: TM352 Web, Mobile and Cloud. There were three talks during the evening. For the purposes of this blog, I'm just going to say something about two of them.

Remote user testing

The first talk of the evening was by a representative from a company called WhatUsersDo (company website). Here's a quick summary of the business: if you've got an interactive product and you need to test it with real users, you can contact this company, who have a bank of on-line testers. These testers can then be videoed and recorded using your products or interfaces. When the testing has been completed, analysts can review the data and send it back to you in a neat report.

In essence, you can get lots of qualitative data relatively easily. You also don't have to go through the challenge and drama of recruiting participants and organising lab sessions. Lab sessions, it is argued, are expensive. Instead, remote testers can use their own laptops (and smart devices), which have embedded video cameras and microphones.

The thing is: how does a recording find its way from a research participant to a user experience analyst? The answer is simple: the cloud helps them do it. Apparently the WhatUsersDo infrastructure is undergoing continual change (which isn't too surprising, given the pace of change in computing), and the business uses Amazon EC2, or Elastic Compute Cloud (I think that's what the abbreviation stands for!). Other bits of interesting technology include the use of Angular.JS (Wikipedia) and MongoDB (Wikipedia).

SessionCam

SessionCam (company website) also helps users to do user testing, but adopts a somewhat different approach to WhatUsersDo. Rather than asking users to talk through their use of a website (for instance), SessionCam actually records where users look as they move through a website.

I was very curious about how this worked. The answer seemed to be pretty simple: through the use of 'magic tags' that were embedded in a web page. It also works through the magic of cookies. I also had another question, which was: if the system is tracking user 'movements', then where does all this data go? The answer was also pretty simple: to the cloud. Like WhatUsersDo, SessionCam also makes use of Amazon cloud storage.

A really interesting aspect to all this is that the company was able to gather and store information about thousands of user interactions. The company could then create what are known as 'heat maps': rough pictures of where users go on a website.
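To make the heat map idea a little more concrete, here's a minimal sketch (my own illustration, not SessionCam's actual code) of how thousands of recorded page coordinates could be bucketed into a grid of counts, which is essentially what a heat map visualises:

    from collections import Counter

    def build_heat_map(events, cell_size=50):
        """Aggregate recorded (x, y) page coordinates into a coarse grid.

        'events' is an iterable of (x, y) pixel positions captured from
        user sessions; the count per cell is what a heat map colours in.
        """
        counts = Counter()
        for x, y in events:
            counts[(x // cell_size, y // cell_size)] += 1
        return counts

    # Example: a handful of sessions that all clicked near the same button.
    sample_events = [(120, 410), (130, 415), (900, 60), (125, 405)]
    print(build_heat_map(sample_events).most_common(3))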

Reflections

This event taught me two things: the first is the interesting ways that cloud technology can be used to create a niche business or service; the second is the unassailable fact that I always need to keep up with changes in software technology.

I've seen Angular mentioned on an increasing number of job adverts. A quick skim read about it mentions some bits of tech that I have used at various times: HTML, the DOM, Javascript and JSON - but I haven't used Angular in anger. In fact, I know hardly anything about it. The same applies to MongoDB: I know what it is, and I know what it does, but I have never found the time to mess about with it. This is something that I really ought to do! (And the same applies to the use of Python, which might well become the subject of another blog post.)

In some respects, these companies represent two mini case studies about the use of cloud technologies.A couple of months back, I went to a talk about a company that shared financial data 'through the cloud'.There are loads of other examples out there.

Christopher Douce

Using the cloud to get to the OU campus

Edited by Christopher Douce, Thursday, 21 Nov 2019, 11:25

I get to visit the Open University campus in Milton Keynes with alarming regularity and getting there is always a bit of a trauma; I need to take three trains and then a shuttle bus.  After doing this journey for about two years, I’ve managed to get the timing down to a fine art, but sometimes things don’t go as smoothly as you hope:  I sometimes miss the shuttle bus and I have to catch a taxi. 

I can’t help the feeling that catching a taxi, on your own, is an extraordinary extravagance.  About a year ago, when my train was delayed, I got chatting to a fellow train traveller who was sat opposite me in the same carriage.  I noticed that he was rifling through some papers that contained the unmistakeable university logo.

‘Are you heading to Walton Hall?’ I asked.  It turned out that he was, and my fellow traveller, like me, was planning to catch a taxi to the campus.

‘Fancy sharing a cab?’ I asked.

It turned out that my fellow traveller, who I had never met before, had pretty much the same job as I had: he had the job title of staff tutor, but was based in Wales and worked in the Health and Social Care faculty.  He had travelled up to Milton Keynes after spending a night out with friends in London.  If I hadn’t nosily spotted his university papers, we would have incurred double the amount of expenses, and missed out on an opportunity for a nice chat.

Sharing lifts

Hundreds of people work at The Open University campus in Milton Keynes.  There are so many people travelling between the university and the train station that the university puts on a shuttle bus at peak hours - but what happens if you travel outside of peak hours?  The answer is you either catch a cab (which is costly), or you try to catch a local bus which takes ages and is pretty infrequent.

I don’t mind sharing taxi rides with colleagues.  The problem is that there are so many of them that I don’t know who they are!  Even if I did know who they were, they might be in another carriage and have hailed a taxi and left the station forecourt before I had a chance to catch up with them.

One solution might be to loiter around the taxi rank and bellow: ‘is there anyone here who is going to THE OPEN UNIVERSITY? Does anyone WANT TO SHARE A CAB?!’ and see what happens.  The problem is that I’m exceptionally English, and doing things like this instils in me a morbid fear of being arrested.

Connections

When I was travelling to the campus one morning, an idle thought went through my mind.  I thought: ‘wouldn’t it be good if I could just take out my mobile phone, start an app, and push a button that says “I’m on the train from London to Milton Keynes – and I would be happy to share a lift to the campus if anyone is up for it…”’ 

This imaginary app would then tell me whether there was anyone else who was on the same train as me, or offer me an alert if anyone on the train would like to share a ride to my destination.  Another variation of this would be to try to find strangers to share journeys with, who might be going to roughly the same part of the city that you were travelling to.  To keep it simple, I thought, ‘no, that would just increase the complexity – let’s just think about this in terms of a single organisation’.

I imagined my app would be able to display the first name of fellow travellers, the faculty or department that they were in (which would be really useful in terms of facilitating a conversation), and also have a picture – so you know what a fellow traveller may look like when you get to the taxi rank.

There would be two obvious wins and one positive side effect.  The two wins are economic (it saves the university money) and environmental (less fuel is burnt getting to the campus).  The side effect is that you might be able to have some great chats, which might help you to keep up to date with what’s going on across the university.

Technical questions

So, how might we make this idea a reality?  Well, we need to figure out how to write an app.  Secondly, we need to figure out how to save data (so we can make a record of who is travelling on which train).  Thirdly, we need to get some data somehow (so we can get information about different trains).  Finally, we need to do a bit of ‘data crunching’ somewhere so we can be alerted as to whether there are other people on our train that we could share a lift with.

Creating an app

So, how do we go about creating an app?  The answer is: there are loads of different ways.  You can create either ‘native apps’ or ‘web apps’ built using HTML 5 and Javascript.

When it comes to native apps, you might want to create an app for either an Android phone or an iPhone.  If you’re thinking of developing an app for an iPhone, you might use Xcode, which is a toolset from Apple (where there is a fancy new programming language called Swift).

If you’re thinking of developing for an Android phone, you might consider using Android Studio, NetBeans (NBAndroid) or MIT AppInventor (I’m sure there are other tools out there!)  The problem, of course, is that some OU staff use Android phones, and some use iPhones.  To attempt to take the pain away from the nightmare of different platforms, there’s something called PhoneGap (but I don’t know too much about that… it’s all new to me!)

Storing data

Assuming that we build an app, then how do we store (and share) data?  This is where the cloud comes in.  The problem is that I don’t want to spend any money setting up services.  Plus, it’s been an absolute age since I’ve done any of that stuff.  Another solution is to make use of services from existing businesses that have already done all the hard work for you.

There are quite a few different providers.  One of the biggest is Amazon.  Amazon offers a service that allows you to ‘plug into’ their existing computing and network infrastructure, allowing you to create and use your own virtual machines, which can then store data (since these virtual machines can host databases, like MySQL).  Rather than having to pay for, host, and power a whole server (which, arguably, is likely to remain idle for quite a lot of the time), you can instead pay for how much processor time, network capacity and data storage you consume.  It’s as if a server has become a utility.  Rather than having to worry about backups and whether you need to buy more processing power, this can all be looked after by a third party: you pay for what you use, allowing you to concentrate on the task of writing code and solving your problem.
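As a rough illustration of what ‘plugging in’ can look like from a developer’s point of view, here’s a sketch that launches a single virtual machine using the boto3 library for Amazon’s services; the machine image and key pair names are placeholders, and you’d need your own AWS credentials set up for it to actually run.

    import boto3

    # Talk to the EC2 service in a European region (closer to users who
    # are travelling to and from Milton Keynes).
    ec2 = boto3.client("ec2", region_name="eu-west-1")

    # Launch one small virtual machine. 'ami-12345678' and 'my-key-pair'
    # are hypothetical placeholder values.
    response = ec2.run_instances(
        ImageId="ami-12345678",
        InstanceType="t2.micro",
        MinCount=1,
        MaxCount=1,
        KeyName="my-key-pair",
    )
    print(response["Instances"][0]["InstanceId"])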

Of course, you might not want to use Amazon.  If not, there are loads of other cloud data and service providers you might use.  Two that come to mind are Rackspace and Microsoft.  The interesting thing about Microsoft (and providers who are similar to them) is that you can choose where your virtual servers live.  If most of your customers are located in North America, it probably makes sense (in terms of network performance) to have your virtual servers served from that part of the world.  If more of your users are located in Europe (such as users who are travelling to and from Milton Keynes), you’re likely to want to host your virtual machines in data centres in Europe.

Another thought is: perhaps you don’t want to store your data in machines that are managed by Amazon or Microsoft.  If so, another approach could be to set up your own private cloud (providing you have your own infrastructure to do this, of course).  You might want to do this if your organisation has already invested quite a lot of capital into IT resources, or if government or institutional policy dictates that your data must stay within a particular jurisdiction.  Everything in life is always a compromise.  You might want to use your own private cloud as opposed to a public cloud, but a private cloud is likely to cost in terms of hardware, power and administrative overheads.

If I were seriously writing this app, what would I do?  I would ask the IT people in The Open University to see if they have got a cloud system that I could use.  Whilst I wait for them to get back to me (which can sometimes take quite some time) I also might try to experiment and create a prototype using a public cloud provider, since some of them can give you ‘trial accounts’.

Getting data

Let’s say I’m going to a module meeting in Milton Keynes and I’m sat on the 8.46 train from London Euston.  There are two things I need to do: I need to say ‘I’m on this train’, which means storing a record so other people (meaning: other Open University colleagues) can see that I would be up for sharing a taxi, and also recording which train I was on.  The problem is: I don’t want to go through the trouble of entering ‘8.46 from Euston, London Midland’, since I’m lazy and I don’t have too much patience.  Plus, we need to iron out any ambiguity.
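Just to give a flavour of what that record might look like, here is a minimal sketch; the field names, and the choice of SQLite as a stand-in for a cloud-hosted database, are my own assumptions for illustration.

    import sqlite3

    conn = sqlite3.connect("taxi_share.db")
    conn.execute("""
        CREATE TABLE IF NOT EXISTS journeys (
            traveller   TEXT,    -- first name: enough to say hello at the rank
            faculty     TEXT,    -- handy for getting a conversation going
            train_id    TEXT,    -- which service the traveller is on
            wants_share INTEGER  -- 1 if happy to share a taxi to campus
        )
    """)

    # Record that I'm on the 8.46 from Euston and up for sharing a cab.
    conn.execute(
        "INSERT INTO journeys VALUES (?, ?, ?, ?)",
        ("Chris", "Maths, Computing and Technology", "EUS-0846", 1),
    )
    conn.commit()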

One way to solve this problem is to see what trains are currently running (because, what happens if my train is delayed?)  Thankfully, Network Rail provides loads of data feeds (Network Rail), which we could use to choose the right train (and I’m wondering whether we might be able to use the magic of GPS positioning too!)
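Here’s a sketch of the kind of matching we might do; the feed URL and the response format are entirely hypothetical (the real Network Rail feeds require registration and have their own schemas), so this only illustrates the logic of picking the right train.

    import requests
    from datetime import datetime

    # Hypothetical departures feed for London Euston; not a real endpoint.
    FEED_URL = "https://example.org/departures/euston"

    def guess_current_train(now=None):
        """Pick the most recent departure heading for Milton Keynes Central."""
        now = now or datetime.now()
        departures = requests.get(FEED_URL, timeout=10).json()
        candidates = [
            d for d in departures
            if d["destination"] == "Milton Keynes Central"
            and datetime.fromisoformat(d["departs"]) <= now
        ]
        return max(candidates, key=lambda d: d["departs"], default=None)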

As a brief aside, being a London resident, I’m a great fan of an app called CityMapper  (CityMapper website).  It gives you loads of information about different bus routes, trains, underground stations, and hooks up to Google Maps so you can see where you’re going.  An interesting question is: how does it work?  One answer lies with the availability of different data feeds, such as the data feeds that Transport for London provides (TfL data feed summary page).  

Why do TfL provide all this data?  The answer is that TfL are in the business of providing transport; they’re not in the business of creating new apps for new-fangled computing devices: that’s not their core business.  Once feeds are made available, they can be used in new and creative ways.

Back to our ‘taxi ride sharing’ scenario: let’s say we’ve got a group of four people on the train that are prepared to share a ride to the campus, then what happens?  Who is going to make the call to the taxi company?  One thought is that the colleague who instigated a ‘taxi share request’ could do that job.  Or, alternatively, the taxi app might do it for us (if our chosen taxi company has some kind of mechanism to accept taxi bookings from recognised apps). 
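The ‘data crunching’ needed to form those groups is not terribly complicated.  Here’s a minimal sketch, assuming the journey records described earlier, that gathers willing travellers on the same train into taxi-sized parties; the first person in each party could be the one who phones the taxi firm.

    from itertools import groupby

    def taxi_groups(journeys, seats_per_taxi=4):
        """Group willing travellers on the same train into parties of up to four."""
        willing = sorted(
            (j for j in journeys if j["wants_share"]),
            key=lambda j: j["train_id"],
        )
        groups = []
        for _, party in groupby(willing, key=lambda j: j["train_id"]):
            party = list(party)
            for i in range(0, len(party), seats_per_taxi):
                groups.append(party[i:i + seats_per_taxi])
        return groups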

Other issues

When we start to think about all this, we come up against questions of user identity and what it means in terms of a mobile device.  Some of my mobile apps make use of my Google+ account, whereas others make use of Facebook.  This can make things a whole lot easier, but it does worry me a little, mostly because I don’t know the extent to which my data is being used (we’re again back to the question of compromise: convenience traded off against control over how our data is used).  One thing that we might want to do is to create our own ‘identity’ management system.

Another issue is that we’re beginning to step into the currently fashionable area of smart cities (Wikipedia); the possibility that journeys through urban environments can be aided and abetted by the use and sharing of data.  (Not to mention costs and communications.)

Final thoughts

Just as I was finishing this blog, the following email popped into my inbox: ‘would anyone like to share a taxi from MK station tomorrow?  I have a choice of trains, the first one arriving MK at 10:02, if that fits with anyone else’s plans?’

I remember another time when I was travelling to the University of the West of England, from Paddington to Bristol Parkway train station.  When I got to Bristol Parkway, I had to catch a taxi from the station to the campus.  When I arrived on campus and started to walk to where I needed to go, I recognised someone.  That someone was the gentleman who had been sitting in the next row along from me.  Again, if only we had known, or had talked, or hadn’t been so English about our early morning journey, we could have shared a taxi together.

A final question is: given all the things that I’ve found out, am I going to go ahead and create this app?  Sadly, I’ve got a whole load of other deadlines to attend to, which means that I can’t afford to invest the time – but I would really like to, and I hope that someone does go ahead and create it!  I, for one, would really like to give it a try.

In the meantime, what I have decided to do is to make more of an effort to chat to the people who catch the same shuttle bus as I do.  This way, if the train ever gets delayed, I might have more of a clue about who I could share a taxi with. 

Christopher Douce

Teaching and learning programming for mobile and tablet devices: London Metropolitan University


On 24 July 2014, I went to a Higher Education Academy sponsored event at London Metropolitan University.  The event was all about programming mobile devices, and it was the third time I had been to this event.  The previous time I went along, I spoke about a new module: TT284 Web Technologies (OU website).  This time I had two purposes: to share something about the beginnings of a new module TM352 Web, Mobile and Cloud (or, more specifically, its main objectives) and to learn what other institutions are getting up to.

A case study…

The first presentation of the day was by Yanguo Jing from London Met (who organised the event) and Alastair Craig.  They presented ‘a case study of the delivery of a year 12 summer school on mobile app development’ (I had to ask what ‘year 12’ meant: it means 16 or 17 year olds…): this was part of an outreach event that London Met run (where students were selected at random to participate).

They described some of the challenges that they faced.  Firstly, the students who joined the summer school sometimes had no programming knowledge, and they had to make the summer school fun.  A really big challenge was to try to scaffold the learning so that the students could create something presentable by the end of the week.

At this HEA event last year, a new programming system called TouchDevelop was introduced.  TouchDevelop is a ‘touch friendly’ programming language from Microsoft Research.  (You can check out the kind of apps that have been created by visiting the apps section of the TouchDevelop site).

The language features a touch screen programming interface that is especially designed to work with mobile devices; it only allows users to choose programming constructs that can legitimately be selected (it is also graphical, in the same sense that Scratch is).  One really interesting aspect of the system is that you don’t have to install anything.  TouchDevelop also creates HTML 5 code, which means that it can be run on a wide range of different devices.

The summer school lasts for a week.  It begins with an introduction to the tool and a discussion of syntax.  The next two days are all about the basics of a game and the game engine.  On the fourth day the students are asked to create their own game, and on the fifth day, students are asked to present their games to each other.  Masters level students acted as supervisors.  One point was that it seemed that some students (who had some prior programming experience, invariably using Scratch) got ahead with everything.

A fundamental question is, ‘how do you teach people in 18 hours when you don’t know what they know?’  The trick, apparently, is to get them to do things.

Some discussion questions were: ‘is it a good idea [to run this kind of summer school]?’, ‘does your department do something similar?’, and ‘how might you scale up this type of outreach activity?’

One thing that I learnt from the discussion is that there is a new version of Scratch available.  This first presentation ended with a discussion about MOOCs, and the point was made that MOOCs are very different to outreach.

Considering the cloud: teaching mobile, cloud computing and the web

The second presentation of the day was by yours truly.  The aim of the presentation was to talk about some of the areas that a new module about cloud computing may (or may not) cover.  Towards the end of the presentation, I asked all the delegates the following questions:

  • What do you think needs to be taught (cloud, mobile, web?)
  • How might you teach these concepts?
  • What might the challenges be?
  • How might you carry out assessments?
  • How do we protect and inform about change?

As everyone discussed these questions, I made a few notes.  One of the fundamental challenges (with an OU course) is to choose technologies that are not going to age quickly.  ‘The cloud’ is a really fast moving area where there appears to be continual change and innovation; new software services and releases are coming out all of the time.  One way to counter this is to teach the underlying concepts and not just information about the services.

Another approach is to perhaps concentrate on building a learning community.  Developers and technical specialists invariably live within a community that shares technical knowledge and expertise.  It might be interesting and useful to expose learners to the dynamics of these environments.

An interesting point was that both mobile and web platforms are just different ways to consume resources.  Increasingly, the ‘web’ is being equated to HTML 5, and HTML 5 is increasingly being embedded within mobile devices.

On the subject of teaching, one delegate made a really interesting and relevant point.  He said, ‘I’ve given up lecturing… half of them just turn off’.  When it comes to teaching the development of mobile apps the thing to do is to split students in to small groups; it is the learning by doing that really counts.

When it comes to assessment, one delegate said, ‘you’ve got to have a project – if you can’t develop an app, then you fail’, and it’s important to get continual updates on progress.  Other approaches might include the use of computer marked multiple-choice questions, and writing about the bigger reflections and lessons from the module.

Poster session

By way of a brief interlude, Yanguo introduced a series of posters that had been put on the wall of the meeting room.  The posters were all about different apps that students had created.  There were two indoor navigation apps, an app for parking (which made me remember one of my blog-rants about poor interaction design), some kind of ‘cash register’ virtual payment app, a food checker or testing app, and a museum guide app.

Bringing the cloud into the classroom

The third presentation of the day was by Paul Boocock, from Staffordshire University.  Paul mentioned that undergrad students are introduced to a range of different platforms: iOS, Android and Windows (if I’ve understood things correctly).  For postgraduate students, there are a number of interesting sounding modules, such as Android app development and Advanced location aware app development.  These link into different mobile technology postgraduate qualifications (Staffs University), such as their Mobile Device Application Development MSc, Postgraduate Certificate (PgCert) and their Postgraduate Diploma (PgDip).

One of the big recent changes to their curriculum is that Staffs is now including ‘the cloud’ in the different mobile modules.  One thing that I should mention is that the concept of ‘the cloud’ is understood in terms of public clouds (as opposed to private clouds that are hosted by the university).

Paul treated us to some pictures of data centres, and said ‘[the cloud] is changing how we teach this stuff’.  He left us with an interesting idea: ‘what used to take 30 days to get up and running can now be achieved in 30 minutes’.  The point was simple: you no longer need to buy, configure and commission servers.  The benefits of ‘the cloud’ include potentially lower costs, scaling and the potential of gaining global reach.  In some respects, though, it might become more difficult to gain direct exposure to the physical hardware that runs these systems.

We were introduced to a term that was unfamiliar to me: cloud computing patterns.  The term relates to the way that cloud systems are consumed as opposed to how they are designed.  Some patterns include on/off, i.e. an application might experience high levels of demand for a while (a bit like batch jobs); that a product or system might take off very quickly (so there would be increases in demand); or there might be predictable or unpredictable bursts of traffic (such as within computer games, for example).
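To illustrate how one of these patterns might be responded to, here is a toy scaling rule written in Python; the capacity figure and limits are invented for illustration, and real cloud platforms let you express something similar as auto-scaling policies rather than code you run yourself.

    import math

    def servers_needed(requests_per_minute, capacity_per_server=500,
                       minimum=1, maximum=20):
        """Toy rule: match the number of virtual machines to current demand."""
        wanted = math.ceil(requests_per_minute / capacity_per_server)
        return max(minimum, min(maximum, wanted))

    # A predictable lunchtime burst, and a quiet spell overnight.
    print(servers_needed(4200))  # 9 machines during the burst
    print(servers_needed(150))   # 1 machine when things are quiet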

Paul also talked about different platforms.  He mentioned a good number that I had heard of (but I’m not intimately familiar with).  These were Amazon (of course), Microsoft, Rackspace, HP Public cloud, and Google Cloud.  Given that his focus was on public clouds for teaching purposes, he discounted HP and Rackspace (I think due to cost), and then considered Amazon.

Amazon apparently offer something called educational grants (Amazon website), which allow educators to gain free credits so that computing students can use their services.  The trade-off is that students who use the Amazon systems will be able to take their skills directly into the workplace.  Apparently, you can tell them how many students you have, and then they sort out the number of licences (or credits).

We learnt that Microsoft (of course) run a similar scheme, which enables students to use Azure academic passes (Microsoft Azure website).  Google was not considered as an alternative since there are no current discounts for non-profit organisations.  In the case of Staffordshire, Paul opted for Microsoft, mainly because they had already made an investment into Microsoft tools and environments.

Before a live coding demo, which featured a pre-built service (from what I’ve noted), we were given a brief description of the different Azure components (or Azure services).  These were: compute, app services, data services, and network (this reminded me that I’ve come across similar terms when looking at the open source equivalent called OpenStack).

At the end of Paul’s session there was a lot of time for discussion.

Points of discussion included the challenge of working with different SDKs, and the emphasis on design patterns.  On the masters course, students were asked to create an interactive chat app that was not too dissimilar to the hugely popular WhatsApp.

Of course, there are always challenges that educators need to be mindful of.  These include the need to change modules without increasing their difficulty, and the question of how to assess everything if everything exists in the cloud (and students create services using lots of template code).  One way to do this is, of course, to ask students to write a reflective report about what they did, to get a sense of what they understand.

All in all, it was both really interesting and really useful to learn how another institution had successfully tackled the introduction of cloud programming into their computing curriculum.

Developing digital literacies

The fourth talk of the day was by Terry McAndrew and had the subtitle, ‘how students can quickly create interactive media resources for your curriculum’.  Terry spoke about the broad subject of ‘digital literacy’, which can be defined as ‘the ability to effectively engage with a range of digital technologies to create, navigate and manipulate information’.  Terry mentioned a resource known as the JISC Digital Literacy InfoKit (JISC website).  The kit covers seven different areas, which are: information literacy, media literacy, communication and collaboration, career and identity management (which I understand to be a new bit), ICT literacy, learning skills and digital scholarship.  A two year digital literacy programme (JISC) was also mentioned.

Interestingly, Yanguo mentioned some digital literacy resources that were available from London Met.  There’s also another bunch of digital literacy resources from the University of Southampton.  All these different resources made me realise that perhaps this is an area that I really need to catch up on.

Another part of Terry’s presentation centred upon accessibility.  Terry mentioned a tool called Xerte (University of Nottingham), which can be used to create accessible digital material that can be delivered through a virtual learning environment to different devices.  It’s a tool that is sometimes used by students who are studying a module that I tutor, H810 Accessible online learning: supporting disabled students (OU website).  The content that is delivered is presented using HTML 5, but the editor uses Adobe Flash (we were, however, told that there are plans afoot to develop an HTML 5 based editing environment).

Two other interesting links (and projects) were mentioned.  The first was JORUM, a repository of digital educational material that can be shared between different institutions.  JORUM has been going for quite a while, and I hadn’t heard it mentioned for quite some time.  Having a quick look at the JORUM site quickly tells me that it has changed quite a bit since I first looked at it properly (which must have been around six or seven years ago).  The second reference was to a project called ACTOER, which is an abbreviation for Accessibility Challenges and Techniques for Open Educational Resources (of which Terry, who is based at TechDis, is the project manager).

I enjoyed Terry’s talk, and I found his presentation of different digital literacy resources useful, but there was little about the learning and teaching of how to program mobile devices.  This said, accessibility is always really important, and it’s something that designers of curriculum need to always be mindful of: I welcomed Terry’s reminders.

Alignment of mobile learning agenda with learning and teaching strategies in HEIs

The final presentation of the day was by Remy Olasoji from the University of East London.  From what I remember, I understand Remy to be an expert in the field of requirements engineering.  His presentation was about taking lessons from requirements engineering to try to understand how best to make use of mobile technology.

A final question of the day was, ‘how do we drive the mobile agenda forward?’  A simple answer was: ‘mobile is already happening – it’s driving forward of its own accord’.  One challenge lies with figuring out how to teach the fundamentals of mobile technologies to enable students to be thoroughly equipped and prepared when they have to work with new and changing devices.  Another challenge lies with figuring out how to best make use of devices to help students with their studies.

Reflections

All in all, a useful event; it’s always useful to hear what happens within other institutions and to learn about what challenges educators need to overcome.  One area that I would like to have heard more discussion about is information and data security.  The ‘cloud’ exposes these issues quite naturally, along with issues that relate to business and management.

Christopher Douce

OpenStack conference, June 2014 (part 1 of 2)

Edited by Christopher Douce, Friday, 6 June 2014, 17:43

On 4 June, I went to an event that was all about something called OpenStack.  OpenStack is an open source software framework that is used to create cloud computing systems.  The main purpose of this blog is to share my notes with some of my colleagues, but also with some of the people who I met during the conference.  Plus, it might well be of interest to others too.

Cloud computing is, as far as I understand it, a broad term that relates to the consumption and use of computing resources over a network.  There are a couple of different types of cloud: there are public clouds (which are run by large companies such as Amazon and Google), private clouds (which are run by a single organisation), and hybrid clouds (which are a combination of public and private clouds).  There’s also the concept of a community cloud - this is where different organisations come together and share a cloud, or resources that are delivered through a cloud.

This is all very well, but what kind of computing resources are we talking about?  As far as I know, there are a few.  There’s software as a service (or SaaS).  There’s PaaS, meaning Platform as a Service, and there’s IaaS, which is Infrastructure as a Service.  Software as a Service is where you offer software through a web page, and you don’t ever touch the application code underneath.  Platform as a Service sits somewhere in between: you deploy your own code onto a managed platform without having to look after the servers that run it.  Infrastructure as a Service is where you might be able to manage a series of ‘computers’ or servers remotely through the cloud.  More often than not, these computers are running in something called virtual machines.

These concepts were pretty much prerequisites for understanding what on earth everyone was talking about during the day.  I also picked up on a whole bunch of terms that were new to me, and I’ll mention these as I go.

Opening Keynote : The OpenStack Foundation

Mark Collier opened the conference.  Mark works for the OpenStack Foundation (OpenStack website).  During his keynote he introduced us to some of the parts that make up OpenStack (a storage part, a compute part and a networking part), and said that there is a new software release every six months.  To date, there are in the order of 1.2k developers.  The community was said to comprise approximately 350 companies (such as RedHat, IBM, HP, RackSpace) and 16k individual members.

Mark asked the question: ‘what are we trying to solve?’  He then went on to quote Marc Andreessen, who said, ‘software is eating the world’.  Software, Mark said, is transforming the economy and disrupting industries.

One of the most important tools in computer science is abstraction.  OpenStack represents a way to create a software defined data centre (a whole new level of abstraction), which allows you to engineer flexibility to enable organisations to move faster and software systems to scale more quickly.

Mark mentioned a range of different companies who are using OpenStack.  These could be considered to be superusers (and there’s a corresponding superuser page on the OpenStack website which presents a range of different case studies).  Superusers include organisations such as Sony, Disney and Bloomberg, for example.

I remember that Mark said that OpenStack is a combination of open source software and cloud computing.  Another link that I noted down was to something called the OpenStack marketplace (OpenStack website).  Looking on this website shows a whole range of different Cloud distributions (many of which come from companies that offer Linux distributions).

Keynote: Canonical, Ubuntu and OpenStack

Mark Shuttleworth from Canonical (Canonical website) offered an industry perspective.  Canonical develops and supports Ubuntu, which is a widely used Linux distribution.  (It is used, as far as I can remember, in the TM129 Technologies in Practice module.)  As well as running on the desktop, Ubuntu is widely used on the server side, running within data centres.  A statistic I’ve noted down is that Ubuntu accounts for ‘70% of guest workloads’.  What this means is that we’re talking about instances of the Linux operating system that have been configured and packaged by Ubuntu (that are running on a server within a datacentre, somewhere).

A competitor to Ubuntu is another Linux distribution called CentOS.  There is, of course, also Microsoft Windows Server.  When you use public cloud networks, such as those provided by Amazon, I understand that you’re offered a choice of the operating system that you want to ‘host’ or run.

An interesting quote is, ‘building your cloud is a bit like building your own mainframe – users will always want it to be working’.  We also heard of something called OpenStack Interoperability Laboratory.  Clouds can be built hundreds of times a day, we were told – with different combinations of technology from different vendors.  ‘Iteration is the only way to understand the optimal architecture for your use case’.

A really important aspect of cloud computing is the way that a configuration can dynamically adapt to changing circumstances (and user demands).  The term for how this is achieved (in the cloud computing world) seems to be ‘orchestration’.  In the OpenStack world, there is a tool called JuJu (Wikipedia).  JuJu enables (through a dashboard interface) different combinations of components to be defined.  There is a concept of a ‘charm’ (which was described as a script that contains some operational code).  If you would like to look at what it is all about, there’s a website called JuJu Charms that I’ve yet to spend time exploring.

I’ve also noted down something called a Service Orchestration Framework, which lets you place services where you want, and on what servers.  There are some reference installations for certain types of cloud installations (which reminds me of the idea of ‘design patterns’ in software).

Mark referred to a range of different technologies during his talk, some of which I had only very briefly heard of.  One technology that was referred to time and time again was the concept of the hypervisor (Wikipedia).  I understand this to be a container (either hardware or software) that runs one or more virtual machines.  Other terms that he mentioned or introduced include KVM (Kernel-based virtual machine), Ceph (a way to offer shared storage), and MaaS, or Metal as a Service (Ubuntu), which ‘brings the language of the cloud to physical servers’.

A further bunch of mind boggling technical terms that were mentioned include ‘lightweight hypervisors’ such as LXC (LinuX Containers), Hadoop, which is a data storage and processing framework, and TOSCA (Wikipedia), which is an abbreviation for Topology and Orchestration Specification for Cloud Applications.  In terms of databases, some new (and NoSQL) technologies that were mentioned included MongoDB and Cassandra.

At this point, it struck me how much technologies have changed in such an incredibly short time, reminding me that we live in interesting times.

Keynote: Agile infrastructure built in OpenStack

The second keynote of the day was by John Griffith, Project Technical Lead, SolidFire.  John’s presentation had the compelling subtitle: ‘building the next generation data centre with OpenStack’.

A lot of people started using Amazon, who I understand to be the most successful public cloud provider, to use IT resources more efficiently.  There are, of course, other providers such as Google compute engine (Google), Windows Azure (Microsoft), and SoftLayer (which appears to be an IBM company).

A number of years ago, at an OU postgrad event, I overheard a discussion between two IT professionals that began with the question, ‘so, what are the latest developments in servers?’  The reply was something about server consolidation: putting multiple services on a single machine, so you can use that one machine (a physical computer or server) more efficiently.  This could be achieved by using virtual machines, but you can only do so much with virtual machines.  What happens if you run out of processing power?  You need to either get a faster machine, or move one of your virtual machines to another machine that might be under-utilised.

The next generation data centre will be multi-tenant (which means multiple customers or organisations using the same hardware), have mixed workloads (I don't really know what this means), and have shared infrastructure.  A key aspect is that an infrastructure can become software defined, as opposed to hardware defined, and the capacity of a cloud configuration or setup can change depending upon local demand.

There were a number of attributes of cloud systems.  I think these were: agility, predictability, scalability and automation.

In the cloud world applications can span many virtual machines, and data can be stored in scalable databases that are structured in many tiers.  The components (that make up a cloud installation) can be configured and managed through sets of predefined interfaces (or APIs).  I also made a note of a mobile app that can be used to manage certain OpenStack clouds.  One example of this is the Cloud mobile app from Rackspace.

Another interesting quote was, ‘[the] datacentre is one big computer and OpenStack is the operating system’.  Combining servers together has potential benefits in terms of power consumption, cooling and the server footprint.

One thing that developers need to bear in mind is how to create applications.  Another point was: consider scalability and plan for failure.  A big challenge lies with uncovering and deciphering what all the options are.  Should you use, for example, block storage services, or object storage?  What are the relative advantages and disadvantages of each?

Parts of this presentation started to demystify some of the terms that had baffled me from the start.  Cinder, for example, is OpenStack’s block storage service.  Looking outwards from the operating system, a block storage device could be a hard disk, or a USB drive.  Cinder, in effect, mimics what a hard drive looks like, and you can store stuff to a Cinder service as if it were a disk drive.  Swift is an object store where you can store objects.  So, you might think of it in terms of sets of directories, the contents of which are replicated over different hard drives to ensure resilience and redundancy.
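For what it’s worth, here’s a sketch of what using Swift looks like from a developer’s point of view, using the python-swiftclient library; the authentication URL and credentials are placeholders for whatever a particular OpenStack installation provides.

    from swiftclient import client

    # Connect to a Swift endpoint; these credentials are placeholders.
    conn = client.Connection(
        authurl="https://cloud.example.org/auth/v1.0",
        user="demo:demo",
        key="secret",
    )

    # Objects live inside named containers; Swift quietly replicates them
    # across different drives to give resilience and redundancy.
    conn.put_container("conference-notes")
    conn.put_object(
        "conference-notes",
        "openstack-june-2014.txt",
        contents=b"Notes from the OpenStack conference...",
    )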

There is a difference between a service that is an abstraction to store and work with data, and how physical data is actually stored.  To make these components work with actual devices, there are a range of different plug-ins.

Presentation: vArmour

I have to admit that I found this presentation thoroughly baffling.  I had no idea what was being presented until I finally picked up on the word ‘firewall’, and the penny dropped: if a system architecture is defined in software, the notion of a firewall as a physical device suddenly becomes very old fashioned, if not a little bit quaint.

In the cloud world, it’s possible to have something like a ‘software firewall’.  A term that I noted down was ‘software defined security’.  Through SDS, you can define what traffic is permissible between nodes and what isn’t, but in the ‘real world’ of physical servers, I’m assuming that physical ‘top layer’ firewalls are important too.

I also came across two new terms (or metaphors) that seem to make a bit of sense in the ‘cloud world’.  Data could, for example, move in a north-south direction, meaning it goes up and down through various layers.  If you’ve got east-west movement of data, it means you’re dealing with a situation where you might have a number of different virtual machines (that might have been created to respond to end user demand), which may share data between each other.  The question is: how do you maintain security when the nature of a configuration might dynamically change? 

Another dimension to security which crossed my mind was the need for auditability and disaster recovery, and both were subjects that were touched upon by other presenters.

In essence, I understood vArmour to be a commercial software defined security product that works akin to a firewall that can be used within a cloud system.

Presentation: The search for the cloud’s ‘God Particle’

Chris Jackson, who works for Rackspace (a company which has the tagline ‘the open cloud company’), gave the final presentation before we all broke for lunch.  Chris confessed to being a physicist (as well as a geek) and referred to research at CERN to find ‘the God particle’.  I also seem to remember him mentioning that OpenStack was used by CERN; there’s an interesting superuser case study (OpenStack website), for those who might be interested.

Here’s the question: if there is a theory that can describe the nature of matter, is there a theory that might explain why a cloud solution might not be adopted?  (He admitted that this was a bit of fun!)  He presented three different theories and asked us to vote on which were, perhaps, the most significant.

The first was: application.  Some applications can be rather fragile, and might need a lot of cosseting, whereas other forms of application might be very robust; they’re all different.  Cloud applications, it is argued, embrace chaos and build failure into applications.  Perhaps the precise character of certain applications might not lend itself to being a cloud application?

Theory two: integration.  There could be the challenge of integration and connection with existing systems, which might themselves have different characteristics. 

The third theory is all about operations.  This is more about the culture of an organisation.

So, which theory is the reason why organisations don’t adopt a cloud solution?  The answer is: quite possibly all of them.


