Podcast Episode 20 - Real-time Refreshable 2.5D Displays at MIT

This week on the IAP, we talk about some research at MIT that is paving the way for some very useful tools for the visually-impaired.

Show Notes & Links


Announcer: This is the IAP - Interactive Accessibility Podcast with Mike Guill and Mark Miller. Introducing Mike Guill and Mark Miller.

Mark Miller: Hey, welcome to the IAP. Thanks for helping us keep it accessible. Do us a favor. If you're enjoying the IAP, share it. Tell someone about it. Hey, even link to it from your accessible website. Mike, how are you doing?

Mike Guill: Good, Mark. I'm feeling a little bit behind on things. We're recording this week's episode at the very last minute.

Mark: Yeah, the last second. If we recorded it any later, it would just be late.

Mike: Yeah. So we can blame the technical issues though, right?

Mark: Yeah, funny things change in the equipment or the software we use to record the IAP, and it goofs up a little bit.

Mike: I wish we had an intern; we could blame it on them.

Mark: I have a dog that I treat like an intern. It doesn't listen, but I'm like, "Hey, get me coffee," and the dog lies down and sleeps instead of getting coffee.

Mike: That's pretty close.

Mark: So listen, this little device that you've come up with here – little device... it's not a little device, but it's called inFORM. It's sort of a new technology that this group is playing around with: a 2.5D interface. This thing is just cool! I want to play with one.

Mike: I do too! Yeah, first of all, it's a gigantic device. It's like the size of a full-sized copy machine that sits on the floor.

Mark: Yeah, well essentially, it's like two copying machines next to each other and then networked.

Mike: This comes out of a group at the MIT Media Lab. They made this device. We've mentioned 3D printing on the show, I think, a few times. The reason they call this 2.5D is because instead of an object being printed out of a material, it's actually assembled from pins that are pushed up on a board, similar to those toys that you see in the department store where you can push your hand in and it makes the shape of the hand, or you put your face on it and you leave it on the shelf with some sort of...

Mark: Yeah, sometimes you make an inappropriate shape with your hand and then you leave it on the shelf. Mike, I know how you operate.

Mike: Yeah, I'm busted. In this case, the pins on the table are being pushed up based on what the camera sees below, and it does it in real-time. This is super cool. We'll link to this in the show notes, but in the video, you can see the developers holding a basket and rolling a ball around the basket.

Mark: So literally, it's like they stick a basket in there, or they stick their hands in and they're cupped. The pins, as you call them, are actually like little columns that pop up, so that shape is made on what starts off as just a flat 2D surface. That's where it gets the 2.5D – it pops up. You stick a ball in there as if the fake hands are holding it, and then as you move your real hands around, the ball kind of rolls with it. It's crazy! It's like a moving sort of topographical map of whatever you've got underneath that camera... in real-time!

Mike: Exactly! In real-time. And I know we like to focus on digital accessibility, but the reason that I brought this one up is because I can see a transition from a device like this, one that can represent physical objects, into the digital accessibility world, where we have so many things – we've talked about graphs and charts before, and the ability for a device like this to display shapes, graphs, charts, maps, you name it, to somebody who is visually disabled is amazing.

Mark: Well, yeah. I mean, I would argue that we are talking about digital accessibility here, and I'll go into it in a little bit, but I would also argue that we are talking about something that is going to bridge both digital accessibility and architectural accessibility – or physical accessibility, if we can say that – as this thing develops. But you're absolutely right. We talked before about how I was at the Boston Accessibility Unconference, I don't know, a month or so ago. One of the questions that we discussed there was how to represent complex scientific graphs to people who are blind. It's a very particular problem, one that is hard to solve. It's sort of the last hold-out, I think, because from a digital standpoint, if you have sight, you can take complicated data, re-represent it in multiple ways, and have an easy-to-consume graph that changes: "I want a pie graph. I want a bar graph. I want data points to be on the graph. I want these two data points to be a graph," and this graph can just change and change and change. From an accessibility standpoint, we're always talking about a similar experience, so how do you give a person without sight that similar, "Here's a really quick, high-level representation of this complicated data," which we experience as a graph? How do you give that to a blind person? That's tough. We didn't even come up with a great answer. We came up with lots of great ideas. The thing that we kind of wished for was very similar to this device. I can imagine, "Here it is in a bar graph. Here it is in a pie graph. Here it is with different data in it," happening, and a blind person being able to run their fingers over this thing and really get that same experience that a person with sight gets when they look at a graph. So to me, it's very exciting from a digital standpoint, and then never mind the implications of this thing representing a picture, you know what I mean?
Here's the way I think of it. Right now, I'm on the East Coast in New England, and I can jump on Skype and call my brother in Corpus Christi, Texas, and see my niece's face, and that brings back for me a similar emotion or experience as seeing her in person. It's not perfect, but it's better than just hearing her voice or not seeing her at all, right? That's why we look at pictures. So what if this thing, in later iterations, was putting up a 3-dimensional face, and that blind person, who can reach out and touch that face when they're in person with somebody, can now reach out and touch something that really, really is similar to that face? You see what I'm saying? I just think the implications – where we're really stuck with visual representations of stuff, this can get us past.

Mike: I totally agree. I was thinking of 2-dimensional representations of things – you know, simple things like maps. We talked about graphs and charts, but I was thinking about maps. Sometimes it's difficult to represent interior space maps – buildings, rooms, hallways – especially in a big place. If you go to a big conference, for example – I know I've brought up CSUN, this conference for people with disabilities that goes on every year in the early spring in San Diego – they have thermoformed maps that are embossed, and you basically flip through this spiral notebook and it's got all the floors of the building and where the elevator banks are and where the restrooms are and all that, for people with visual disabilities. But those things are incredibly time-consuming and expensive to produce, so wouldn't it be easier to just have some device like this that sort of rendered for you what that map looks like on the page?

Mark: Well, yeah, absolutely. And then you take that even a step further in a couple of different ways. One, maybe this thing ends up in the form of an iPad-like device, something portable that can be carried around, so that when you walk into a facility it's connecting to the WiFi and feeding whatever information you need to it. And then forget just the physical room layout or where's the bathroom – now you get to the bathroom and all of a sudden – and I'll tell you this quick story. When I was coming back from the USBLN conference in Los Angeles, I had a layover in Phoenix, Arizona, and I was doing the de-plane, sprint-to-the-bathroom, make-it-to-the-gate thing that everybody does. When I got to the bathroom, there was this woman and this guy standing there. The woman was an airport employee, but she was kind of – not in distress, but in a kind of concerned tone going, "What do I do now? What do I do now?" They were literally almost blocking the doorway to the men's room. In my mind, this kind of stopped me in my tracks because I realized that there was something just amiss. I took in the situation and realized that she was talking to this young dude who was blind and that she had been assisting him through the airport, but now that they were in the doorway to the men's room and this guy had to go, she, as a young lady, was trying to figure out how to handle the situation. So I said to her, "Hey, if it's okay with him..." – and I'd just been at this USBLN conference, so I'd been talking with blind people and I was sort of at the top of my game in terms of being courteous and – not being courteous, but appropriately offering things up. So I said, "Hey, ma'am, if it's okay with him, I'd be happy to escort him into the bathroom. I've got to go in there myself." He chimed in and said, "Oh, that would be great if you could do that." So he grabbed my left elbow and I shuffled him into the bathroom and showed him around.
When he was done, he went to wash his hands and stuff like that. I was showing him, "Here's the sink, here's the paper towels, here's the soap dispenser." I realized that for as much as a blind person can use their hands and feel out their environment, it's very inefficient for them to know where stuff is. For example, if the paper towels are on the left and he starts feeling on the right, he can end up in the other end of the bathroom before he finds the paper towels, you know what I mean? So that information of, "Hey, paper towels on the left, soap dispenser on the right, the sink's right in front of you" and then he's got it from there, you know what I mean, is very helpful. So this device to me, if you walk into that bathroom and all of a sudden, you can run your hands over this device and know where all those different elements are, it's incredible!

Mike: There are some big gains being made right now in interior navigation. Like the way we use GPS to drive around the roads and fly in planes and stuff like that, there are some inroads being made toward interior space navigation in buildings, based not only on GPS information but also supplemented with WiFi hotspots and that sort of thing. It's tricky, but it is becoming more and more possible. There's obviously a lot of interest in that.

Mark: Well, I'm wondering about them being tied together. You know what I mean?

Mike: Yeah. Where I see a device like this working – in your scenario, it was a sort of iPad-like device – I kind of see it the same way. You can imagine that these columns became pins, sort of like a Braille embossed kind of thing...

Mark: A little more detail.

Mike: A little more detail, but then instead of it being all those pins that pop up, you could cover that whole thing with some flexible membrane so that the pins that do pop up create the same feeling as a thermoformed page. Then you get a little bit of a contour, and you could feel height a little bit and all that.

Mark: I wonder about RFID tags inside of things like soap dispensers, in my scenario – and maybe that's not the answer, but somehow identifying the location of these elements and what they are.

Mike: Yeah, I think it's a wonderful possibility. I think the only difference I was going to say is that I think I view it as more of a display device, something that you attach to whatever you use via Bluetooth or something like that.

Mark: Well, what if you've got a pair of Google Glasses on, right? It's got a camera looking out there, because essentially, that's one half of what this device is: a camera. And then you've got your iPad or refreshable Braille deal, whatever that would be, in front of you, and you're simply looking around and running your fingers over a changing environment. That's the thing to realize – going back to this, it changes in real-time. That's why if you put that ball in there and you move your hands around, the ball starts shooting around inside that cup of your hands or with the contouring of the basket. You can literally roll that ball around the basket.

Mike: Yeah, there would have to be a way to pause it.

Mark: Well yeah, I'm sure that you could. So now you're looking around the bathroom and literally a soap dispenser is shaping out or a paper towel is shaping out, the sink is shaping out and there you go and now a person without eyesight has that same ability to scan their environment.

Mike: Yeah, that's true. That's true.

Mark: You know what I mean? Like somebody with sight, or at least a very close way to do it. It's funny – it's a simple little thing, but when you start to attach it to all these other technologies we have going on and speculate on its use... It's got a ways to go, right? You've got to get it down from a photocopier-sized device. You've maybe got to make it a little more refined, but I think as a trailhead into this technology, this is fantastic, this current inFORM 2.5D interface.

Mike: Yeah, I'm amazed what they're able to do with this.

Mark: Yeah. Well, what do you think, Mike? Have we covered this?

Mike: I think I may... actually go on and on about this. I see some potential things and uses. I know we don't want to bore people with an hour-long show...

Mark: ...with how excited we are about some crazy technology.

Mike: Yeah, like we are about with refreshable displays, yeah.

Mark: Yeah. No, really good one. We'll link to it. The video – you've got to watch the video. For you guys listening, you should check out the video. It's really not that long, but you'll look at it and you'll be like, yeah.

Mike: Yeah.

Mark: I'd like to play with this thing.

Mike: The video is less than a minute long. I'll post a link to the little engineering article about it, which has the video embedded in it, so you'll be able to see.

Mark: Alright! Well, very cool! Thanks, Mike.

Mike: Yeah. Thanks, Mark. Until next time...

Mark: Yeah, this is Mark Miller...

Mike: ...and this is Mike Guill...

Mark: ...reminding you to keep it accessible.

Announcer: The IAP - Interactive Accessibility Podcast brought to you by Interactive Accessibility, the accessibility experts. You can find their Access Matters Blog at Interactiveaccessibility.com/blog
