City Uni talk: User-centred government links

Hi everyone!

Thanks very much for your time yesterday. I really enjoyed visiting City Uni again.

I mentioned a lot of different websites and projects during the talk. If you’d like to read more, here are links to the things I mentioned.

Home Office Digital Data and Technology

If you’re interested in reading more about the work we do in Home Office Digital Data and Technology, have a look at our department blog.

I mentioned the Electronic Visa Waiver project I worked on for citizens of Kuwait, the UAE, Oman, and Qatar, and how we launched the service as a “private beta”. This allowed us to do summative, end-to-end usability testing of the whole system. I wrote a short blogpost about the project. It mentions some of the real-world “aaaagggggggh” moments.

Kate Tarling, head of service design in the Home Office, wrote a really good blogpost on what service design is and what service designers do in government.

User centred government

If you’re interested in learning more about how user centred design is done in government, these links might be useful.

Wikipedia have a good article explaining what GDS is and where it came from.

The most useful thing produced by GDS is the GOV.UK Service Manual. This is like the user centred design “playbook” for people working in government. It’s a great mine of knowledge to combine with your HCI knowledge.

The other useful thing GDS has produced is the GDS Service Standard.

We use these as the required criteria we have to meet when we build “a thing”. There are 18 points to meet, and they are both carrot and stick for us.

The GDS Design Notes blog is really useful if you’re interested in interaction design.

The GDS Research blog is really useful if you’re interested in learning about our work understanding users.

Agile and government

As I mentioned, we try to work using agile development methods. The service manual has a good introduction to agile development in government.

Again, the GDS Service Manual has a good explanation of the different phases of a project and how each phase works: from discovery, to alpha, to beta, to live.

Some of the evaluations we use

Heuristic evaluations

We use Jakob Nielsen’s 10 heuristics for user interface design. I also mentioned that we use a modified version of these for accessibility and access-needs heuristic evaluations. They were put together by Judith Fellowes.

Content evaluation

A really handy way to evaluate language is to put it in front of users and ask them to highlight, with one highlighter pen, the words that are important and, with another pen, the words they don’t understand.

Pete Gale wrote a great blogpost explaining how he used this technique in testing content with users.

There are a number of useful automated tools for evaluating content. Readable.io gives you a baseline readability level for paragraphs of text.

There’s also an app called Hemingway App which does a similar thing to Readable.io.
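To give a flavour of the kind of baseline score these tools report, here’s a minimal Python sketch of the classic Flesch reading-ease formula. It uses a crude vowel-group heuristic for syllable counting, and the sample sentence is my own – it isn’t taken from either tool, and real tools use far more sophisticated counting:

```python
import re

def count_syllables(word):
    # Crude heuristic: each run of consecutive vowels counts as one syllable.
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def flesch_reading_ease(text):
    # Flesch reading ease: higher scores mean easier text
    # (plain English tends to score around 60-70 or above).
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    return (206.835
            - 1.015 * (len(words) / len(sentences))
            - 84.6 * (syllables / len(words)))

score = flesch_reading_ease("Apply for a visa. It is quick and easy.")
```

Treat this only as an illustration of the idea: shorter sentences and shorter words push the score up, which is roughly what Readable.io and Hemingway reward.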

You can also use Google Trends to see if the words being tested are actually words that users use.

Get in touch

If you’re interested in anything here, get in touch!

Leave a comment here (down there at the end), or you can get me on Twitter or email.

Seen in India: Biometric staff tracking systems

While we were travelling around in India earlier this year, I saw lots of interesting, fascinating, curious, and worrying things.

At the door of a hotel we were staying in, in one city, I saw this biometric 1) recognition system by a company called Kanoe Systems 2), who make “time and attendance systems”.

Biometric recognition systems recognise humans and grant (or deny) privileges based on physiological characteristics that “are related to the shape of the body. Examples include, but are not limited to fingerprint, palm veins, face recognition, DNA, palm print, hand geometry, iris recognition, retina and odour/scent”. 3)

The purpose of the system in this photo is to record staff entry and exit times and tie them to biometric characteristics, in this case possibly fingerprint and face.

The green glowing panel on the right hand side is the fingerprint reader, while the black square with the purple lights around it is the facial recognition camera.

The member of staff presumably must agree to register their fingerprints and face in order to work at the hotel.

From reading their website, these entries and exits can then be tied to payroll systems.

Since I wasn’t registered, it wouldn’t recognise my face. (Img 1 and 2 below)

 

Img 1: Hotel staff biometric recognition system. It was trying to recognise my face.
Img 2: Hotel staff biometric recognition system, no face on file – “not found”.

I hope the users’ biometric data is securely stored, that access to it is audited to make sure only the people who need it have access, and that it can be deleted once staff leave.

It is worrying that these systems are being used for recording staff entry and exits. It shows a lack of faith in staff members. I wonder what staff members think?

 

References

1. Wikipedia article on Biometrics https://en.wikipedia.org/wiki/Biometrics
2. Kanoe company website http://www.kanoe.com/time-attendance
3. Extract from Wikipedia article on Biometrics

Léonie Watson talking web APIs at Paris Web

Paris Web 2016

I attended Paris Web 2016 last week (more on that later) where Belén Barros Pena and I spoke about our DIY Mobile Usability Testing project.

Léonie Watson and Charles McCathieNevile, both working on W3C standards, spoke about HTML 5.1 and web APIs. I had a quick chat with Léonie afterwards (in a noisy breakout area).

She explained the different input and output modalities being brought natively into web browsers with these APIs.

They’re very interesting for future native web interfaces – being able to speak to your browser natively and get notifications of events in your browser, without relying on the underlying operating system.

They also provide possibilities for native accessibility support in the browser – a screen reader user could have her browser read out the web page for her, a Dragon Dictate user can speak to his browser to search for something on the web.

Audio recording

The interview is about 7 minutes long.

Transcript

00:00 Can you tell me your name please?

It’s Léonie Watson.

00:07 And what do you do?

I’m communication director and a principal engineer at The Paciello Group, known as TPG, and also part of the accessibility team at the Government Digital Service, working on the GOV.UK platform.

00:21 How did you get involved in the work that you do?

Completely by accident. I was a web designer in the mid 1990s, lost my sight in 2000, and after a pause to get life back together, went back to work and by accident ended up working for an agency that specialised in UX and accessibility focused web development.

00:45 Do you come from a more development background?

Well, back in the 90s it was a bit of everything – so you did a bit of the design, a bit of the backend functionality, a bit of the HTML, and, when they came along, CSS and those sorts of technologies, so..

There wasn’t any real distinction between roles, like we have now, back in those days. You just created stuff and did it all pretty much.

1:11 Today your talk, at Paris Web, was about HTML5.1 and web platforms. You spoke about different web APIs. Can you explain what a web API is?

Most web APIs are Javascript APIs, which means you use Javascript within a web application – something that sits in the browser – to do all sorts of different interesting things. There are lots of APIs to choose from, and we looked at a few of them at Paris Web today.

01:20 You spoke about different APIs: push, vibration, and speech. Why are they important for web users? What can they do?

In general we are seeing a big move away from software and native apps to creating what we call web apps. So, the same things but they work in the browser using web technologies.

So things like the Push API allow us to bring push notifications, just as we get them on the smartphone platforms, into browser-based applications.

The Vibration API does something similar: it brings haptic feedback, again like you get on most mobile devices, so you can get the device to vibrate and shake.

The Web Speech API is a little different: it allows you to create web applications that will either speak out loud to you or respond to your speech input.

So for example recreating a GPS application in the browser is now possible. You can ask it for directions and it will speak those directions back out to you.

03:00 Why is it important to have those capabilities in the web browser? Why not just use native smartphone apps?

It’s very expensive to develop and innovate with native applications. You have to develop them for iOS, for Android, and maybe other platforms as well, depending on what your requirements are.

There are technologies that will let you use web technologies and translate them into native code, but you’ve still got double the amount of testing and distribution mechanisms. It just massively complicates the entire development ecosystem.

It also seems that users are less and less inclined to download apps from anything other than the big providers, like Facebook and Twitter.

The research suggests that barely any apps are used. Most users have about 3 apps open at any one time. Most of us have apps we use regularly on the home screen, and then 2 or 3 screens’ worth of redundant apps that we’ve downloaded, used twice and haven’t looked at since.

There’s a lot to be said for using the universally available and accessible technologies that come with the web stack.

04:13 What do Web API capabilities mean for users with access needs, like visual, auditory or cognitive disabilities?

So in some cases I think there is huge potential.

Take the Push API, for example: your web application could deliver a visible push notification, which sighted people would obviously see, and which screen readers would be alerted to by the browser and would read out automatically.

But if you team it up with the Vibration API, you can attach a vibration pattern to it, so someone who is using screen magnification and not looking at the portion of the screen where the notification appeared would be alerted.

Or equally someone who had a hearing impairment would get the vibration notification instead of any audio representation that came with it. So there are some really useful ideas I think.

The talk I gave today looked at the example of using speech output on a web application to provide hints for people who are less experienced at using the web, particularly with complex widgets like tab panels or sliders.

It’s very easy to set up: you can turn on the option to have spoken help assist you, and it can give you advice on how to use or interact with a particular component on a screen.

Again, I think there are a lot of options for how we can use these APIs to make the user experience for people with disabilities a lot more interesting, if not a lot more convenient.

05:51 If you weren’t working in Internet and web standards, what would you be doing?

(Laughs), I have absolutely no idea. I am so thoroughly in love with what I do that I really can’t imagine doing anything else. I dunno actually, (pauses) professional book reader maybe?! I like cooking as well so, yeah, maybe I’d be working in the food industry. I don’t know!

06:21 If people want to read more about what you do, or contribute to web standards, where do they find more information?

Well, we’d definitely like more people to come and contribute at the W3C. It’s getting easier all the time.

If you’d like to reach out to me via my blog, which is tink.uk, I’m on Twitter – @leoniewatson – otherwise my email is tink@tink.uk and you can reach me there. It’d be super to hear from people.

06:52 Thanks a lot, Léonie.

Cheers, bye!

Pink is for ladies, red is for men.


These “pink taxis” are for Muslim women who do not want to get into taxis with men. They are driven by women, for women passengers only.

“Our society is conservative and our women do not want to ride with men alone at night or when they arrive at the airport at two or three in the morning,” Ammar Bin Tamim, director of Dubai Taxi, told Reuters.

Men can use the “red taxis”.

Launching a service in Kuwait – adjusting your user research

[My blog post was originally published on the Home Office Digital blog on the 27th of November. This is an archive, with some added information such as links and specific dates.]

Walking back to our hotel after a busy day during the EVW private beta launch in Kuwait City.

On Monday morning, the 23rd of November, we successfully launched the Electronic Visa Waiver (EVW) service private beta in Kuwait to very little fanfare. The way a private beta should be launched.

The launch was combined with a user research field trip so we could test the system with these beta users.

I’ve been on many research trips in the past, but this user research trip was different: we were watching our users interact with our service in anger. This wasn’t a simulated scenario in a lab. It wasn’t a simulated scenario at someone’s home. This was for real.

It went smoothly. At the time of writing, 10 participants had made applications and 28 EVWs had been approved – for the individuals making the applications, and members of their families. The way we worked helped.

Planning

Like any research activity, planning is very important. It helps you identify what you want to get out of the research. It also helps you to anticipate things that might go wrong.

For our private beta in Kuwait we based our research plan on our previous user research trips to UAE and Kuwait.

We learned from previous difficulties. This time around we created guidelines to help the local translators and, being more aware of the culture, we scheduled the sessions for later in the day – between 12pm and 7pm – to make them more convenient for the users. We had iterated our user testing, and this made our launch easier.

We planned for things to go wrong – participants coming without the necessary documents, participants being delayed, technical issues.

When things went wrong, we were ready, we adapted and our users appreciated it.

Houston, can you hear me?

Communication with the participant

During the recruitment phase we put a lot of emphasis on explaining to the potential participants that they needed to have travel and accommodation booked in order to take part in our research.

Normally, you would not tell the user too much information in advance, but we decided to do this because this was a private beta launch – the user would be using the service for real.

On Thursday we were visited by Tobias Ellwood, Minister at the Foreign and Commonwealth Office, and Matthew Lodge, the British Ambassador to Kuwait. Not a usual occurrence during a research session, you’ll agree.

The presence of high-ranking officials wasn’t ideal, as it put many of the participants on edge. It was a necessary part of the service launch, however, and when we explained the situation to the participants they felt more at ease.

Communication with the team

Communication between team members is important during any usability testing session. It’s even more important when your users are using your service for real, and your team members are 6,000km away.

A screenshot of the Slack messaging channel used for the launch of the EVW private beta.

During the live applications we kept in touch with our team in London via Slack. We alerted the team when the participant was about to submit the form so that the caseworkers would be ready.

This real-time communication is vital when a participant is sitting beside you in one country and the outcome of their actions is only seen in another.

It allowed us to identify issues, notify the necessary people and manage the participants’ expectations.

You’ve gotta roll with it

This user research trip was quite different from our normal work. It was a combination of user testing and service launch, meaning we each needed to perform several complicated roles at once: part user researcher, part troubleshooter, part tester, and part childminder.

A participant using the EVW service on their smartphone. The recording was done with our DIY mobile usability testing kit.

Some participants were mobile device users and preferred to use their smartphones, so we tested them using their phones.

As anticipated, some participants came without travel details, so we had sample travel arrangements prepared for them. In these cases we used those sessions as more formative research.

A picture of a young boy playing with post-it notes and sharpies
An unexpected, but welcome visitor.

Some participants came with their children, so we kept them occupied while their parents took part in our research.

A screenshot of an SMS from a user research participant
We kept in touch with our participants to make sure they received the EVW email notifications.

When something went wrong, we waited for a suitable moment, so we didn’t compromise the research, and then explained to the user what had happened. We then worked with them to find an answer, and ensured they left feeling satisfied.

What’s valuable is how we use what we have learned to inform the next service iteration.

We will use what we’ve learned next week, when we continue our research and keep improving the service.

Let’s share

Do you have a different approach or thoughts on what we’re doing? We’re open to further improvements in how we approach digital transformation and would love to get your thoughts, inputs and experience in the comments section below or via Twitter.

W3C public-privacy thread

uT8ì4XY0R È42R0X0Z8UT U5 ÓíS0T é867ZYÕXU2R08S43 1ó Z74 uT8Z43 À0Z8UTY Ì4T4X0R ÂYY4S1Ró 8T Õ0X8Y UT õô È424S14X õûáÂXZ82R4 õúDÀU UT4 Y70RR 14 Yí1942Z43 ZU 0X18ZX0Xó 8TZ4X54X4T24 i8Z7 78Y VX8ì02óC 50S8RóC 7US4 UX 2UXX4YVUT34T24C TUX ZU 0ZZ02QY íVUT 78Y 7UTUíX 0T3 X4VíZ0Z8UTD Íì4XóUT4 70Y Z74 X867Z ZU Z74 VXUZ42Z8UT U5 Z74 R0i 0608TYZ Yí27 8TZ4X54X4T24 UX 0ZZ02QYD