Paris Web 2016
I attended Paris Web 2016 last week (more on that later) where Belén Barros Pena and I spoke about our DIY Mobile Usability Testing project.
Léonie Watson and Charles McCathieNevile, who both work on W3C standards, spoke about HTML 5.1 and web APIs. I had a quick chat with Léonie afterwards (in a noisy breakout area).
She explained about the different input and output modalities being brought into web browsers natively with these APIs.
They’re very interesting for the future of native web interfaces – being able to speak to your browser natively, and being able to get notifications of events in your browser, without relying on the underlying operating system.
They also open up possibilities for native accessibility support in the browser – a screen reader user could have her browser read out a web page for her, or a Dragon Dictate user could speak to his browser to search for something on the web.
The interview is about 7 minutes long.
00:00 Can you tell me your name please?
It’s Léonie Watson.
00:07 And what do you do?
00:21 How did you get involved in the work that you do?
Completely by accident. I was a web designer in the mid 1990s, lost my sight in 2000, and after a pause to get life back together, went back to work and by accident ended up working for an agency that specialised in UX and accessibility focused web development.
00:45 Do you come from a more development background?
Well, back in the 90s it was a bit of everything – so you did a bit of the design, a bit of the backend functionality, a bit of the HTML and, when they came along, CSS and those sorts of technologies, so..
There wasn’t any real distinction between roles, like we have now, back in those days. You just created stuff and did it all pretty much.
01:11 Today your talk at Paris Web was about HTML 5.1 and the web platform. You spoke about different web APIs. Can you explain what a web API is?
In general we are seeing a big move away from software and native apps to creating what we call web apps. So, the same things but they work in the browser using web technologies.
So things like the Push API allow us to bring push notifications, just as we get them on the smartphone platforms, into browser-based applications.
The Vibration API does something similar: it brings haptic feedback, again like you get on most mobile devices, so you can get the device to vibrate and shake.
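For readers curious what that looks like in code, here is a minimal sketch. The specific pattern durations and the `notificationPattern` helper are my own illustration, not something from the talk:

```javascript
// Build a vibration pattern: alternating vibrate/pause durations in ms.
// The durations here are illustrative, not from the talk.
function notificationPattern(repeats) {
  const pattern = [];
  for (let i = 0; i < repeats; i++) {
    pattern.push(200, 100); // vibrate 200 ms, pause 100 ms
  }
  pattern.pop(); // drop the trailing pause
  return pattern;
}

// Feature-detect before calling: not every browser supports vibration.
if (typeof navigator !== 'undefined' && 'vibrate' in navigator) {
  navigator.vibrate(notificationPattern(3));
}
```

The feature-detection guard matters in practice, since vibration support is mostly limited to mobile browsers.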
The Web Speech API is a little different: it allows you to create web applications that will either speak out loud to you or respond to your speech input.
So for example recreating a GPS application in the browser is now possible. You can ask it for directions and it will speak those directions back out to you.
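As a rough sketch of the speech-output half of that GPS example (the wording, the locale choice and the fallback behaviour are my own assumptions, not from the talk), speaking directions aloud takes only a few lines:

```javascript
// Speak a string of directions aloud via the Web Speech API.
// Returns false where speech synthesis isn't available, so callers
// can fall back to a visual-only route display.
function speakDirections(text) {
  if (typeof window === 'undefined' || !('speechSynthesis' in window)) {
    return false; // no speech support in this environment
  }
  const utterance = new SpeechSynthesisUtterance(text);
  utterance.lang = 'en-GB'; // illustrative voice locale
  window.speechSynthesis.speak(utterance);
  return true;
}

speakDirections('In 200 metres, turn left.');
```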
03:00 Why is it important to have those capabilities in the web browser? Why not just use native smartphone apps?
It’s very expensive to develop and innovate with native applications. You have to develop them for iOS, for Android, and maybe other platforms as well, depending on what your requirements are.
There are technologies that will let you write in web technologies and translate that into native code, but you’ve still got double the amount of testing and separate distribution mechanisms. It just massively complicates the entire development ecosystem.
It also seems that users are less and less inclined to download apps from anyone other than the big providers, like Facebook and Twitter.
The research suggests that barely any apps are used. Most users have about 3 apps open at any one time. Most of us have the apps we use regularly on the home screen, and then 2 or 3 screens’ worth of redundant apps that we’ve downloaded, used twice and haven’t looked at since.
There’s a lot to be said for using the universally available and accessible technologies that come with the web stack.
04:13 What do Web API capabilities mean for users with access needs, like visual, auditory or cognitive disabilities?
So in some cases I think there is huge potential.
Take the Push API, for example: your web application could deliver a visible push notification, which sighted people would obviously see, and which screen readers would be alerted to by the browser and read automatically.
But if you team it up with the Vibration API, you can attach a vibration pattern to it, so someone who is using a screen magnifier and not looking at the portion of the screen where the notification appeared would be alerted.
Or equally someone who had a hearing impairment would get the vibration notification instead of any audio representation that came with it. So there are some really useful ideas I think.
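Sketching the combination she describes in code (the vibration pattern, the `multimodalAlert` name and the permission handling are my own simplifications):

```javascript
// Deliver one alert through several modalities: a visible notification,
// which the browser also surfaces to screen readers, plus a vibration
// for users who won't see or hear it. Returns the channels that fired.
function multimodalAlert(message) {
  const channels = [];
  if (typeof Notification !== 'undefined' && Notification.permission === 'granted') {
    new Notification(message); // visible, and announced by screen readers
    channels.push('visual');
  }
  if (typeof navigator !== 'undefined' && 'vibrate' in navigator) {
    navigator.vibrate([100, 50, 100]); // short double buzz, illustrative
    channels.push('haptic');
  }
  return channels;
}
```

Each channel degrades independently: a user gets whichever modalities their device and permissions support.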
The talk I gave today looked at the example of using speech output in a web application to provide hints for people who are less experienced with the web, particularly with complex widgets like tab panels or sliders.
It’s very easy to set up: you can turn on an option to have spoken help assist you, and it can give you advice on how to use or interact with a particular component on the screen.
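A sketch of how those spoken hints might be wired up – the hint wording, the `HINTS` table and the opt-in flag are all invented for illustration, not taken from Léonie’s talk:

```javascript
// Map ARIA widget roles to spoken usage hints. The wording is
// invented for illustration.
const HINTS = {
  tablist: 'Use the left and right arrow keys to move between tabs.',
  slider: 'Use the up and down arrow keys to change the value.',
};

// Return a hint only when spoken help is switched on and the role is known.
function hintFor(role, spokenHelpEnabled) {
  if (!spokenHelpEnabled || !(role in HINTS)) return null;
  return HINTS[role];
}

// In the browser, speak the hint whenever such a widget gains focus.
if (typeof document !== 'undefined' && typeof window !== 'undefined') {
  document.addEventListener('focusin', (event) => {
    const hint = hintFor(event.target.getAttribute('role'), true);
    if (hint && 'speechSynthesis' in window) {
      window.speechSynthesis.speak(new SpeechSynthesisUtterance(hint));
    }
  });
}
```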
Again, I think there are a lot of options for using these APIs to make the experience for people with disabilities a lot more interesting, if not a lot more convenient.
05:51 If you weren’t working in Internet and web standards, what would you be doing?
(Laughs) I have absolutely no idea. I am so thoroughly in love with what I do that I really can’t imagine doing anything else. I dunno actually, (pauses) professional book reader maybe?! I like cooking as well so, yeah, maybe I’d be working in the food industry. I don’t know!
06:21 If people want to read more about what you do, or contribute to web standards, where do they find more information?
Well, we’d definitely like more people to come and contribute at the W3C. It’s getting easier all the time.
06:52 Thanks a lot, Léonie.