The Hoesley Program is open to current sophomores and juniors who are interested in broadening their digital literacy and technology skills and fostering career connections at Penn and beyond. This year, we are accepting a cohort of around 5-10 students. Read more about our Hoesley students in related blog posts and apply online.
Take a break from studying, have some refreshments, and join your friends to watch election results in the Class of ’55 conference room (241 Van Pelt-Dietrich Library Center) on Tuesday, November 8 from 5pm to midnight. We hope to see you there!
This post is adapted from an email I wrote in response to a question about the best way of obtaining a transcription of an audio file.
Good transcriptions/captions are incredibly useful in a variety of situations, and because of ADA compliance requirements, they’re increasingly a necessity. People usually don’t think about this ahead of time, so I try to encourage people to build captioning into research budgets and grant applications whenever possible, because costs add up. The more footage you have, the more likely you’ll need to pay someone else to do it, and even just 10 hours of audio could cost you $1,000 to have transcribed by a captioning service.
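To give a rough sense of how these costs scale, here’s a minimal sketch of a budget estimate. The per-minute rate is an assumption (about $1.67 per audio minute, consistent with the $1,000-for-10-hours figure above); actual rates vary by service, accuracy guarantee, and turnaround time, so check current vendor pricing before budgeting.

```python
# Rough transcription-cost estimator. The default rate below is an
# assumption (~$1.67 per minute of audio, i.e. roughly $100 per hour);
# real vendor pricing varies.

def estimate_cost(hours_of_audio, rate_per_minute=1.67):
    """Return the estimated cost in USD of transcribing `hours_of_audio`."""
    return round(hours_of_audio * 60 * rate_per_minute, 2)

if __name__ == "__main__":
    for hours in (1, 10, 40):
        print(f"{hours:>3} hours of audio ≈ ${estimate_cost(hours):,.2f}")
```

Even at modest rates, a documentary-scale project with dozens of hours of raw footage quickly runs into the thousands of dollars, which is why building the cost into a grant proposal matters.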
Some of you may be tempted to rely on YouTube’s automatic captions. By way of example, here’s a video we put up where all of the speakers speak quite clearly:
But (as of late 2016) the quality of the YouTube automatic captions—although clearly they’ve made huge progress over the years—still means that they serve no real purpose other than their comedic/entertainment value. They’re good enough only to get a very general idea of what’s going on, and that’s about it. And this is with clean audio and clear speakers with a standard American English accent.
It’s not accurate enough for ADA compliant captions or for hearing impaired people to find useful.
It’s not accurate enough for a native English speaker to watch the video with the sound off.
It’s not accurate enough for non-native English speakers to use to increase comprehension or to use with automatic translation services.
It’s not accurate enough for a production transcript for an editor to find clips to use.
It’s not accurate enough to provide useful search capability.
It’s not accurate enough as an alternate way of archiving audio content.
It’s not accurate enough to use the transcriptions in a thesis, dissertation, or journal article.
It’s not accurate enough to do a qualitative analysis of the text.
It MIGHT be accurate enough for some degree of SEO, but it’s certainly not ideal.
It’s inaccurate enough that if you’re going to take these captions as a starting point and then go back and edit them, you’re not really saving yourself much time.
Inaccurate captions can also detract from the user experience because users end up focusing on the errors instead of on your content.
It’s inaccurate enough that it makes it difficult to impossible to repurpose the text to other contexts (blog posts, tweets, emails, etc.).
The best transcription software out there still works best when it’s had a chance to learn a particular speaker’s voice, which takes time and means you have to correct the software as you go so it can learn from its mistakes. This is fine when the same person is transcribing their own voice over and over again, but it’s not so useful for just a handful of interviews of each speaker.
I say all of this not to put down YouTube (again, I’m actually really impressed it’s as accurate as it is) but in support of the idea of paying human beings to transcribe it for you—preferably people who are experienced in doing so, but almost any person is going to do a better job than software.
Whether you’re going to hire a service or pay an undergrad to type something up for you, here are some things to consider, all of which can help determine which route you take:
The fairest way to compare services is to be sure you’re paying per minute of interview, not per minute of time spent transcribing, which will vary from person to person.
Are volume discounts available?
Are educational discounts available?
Try to find a service which guarantees a certain level of accuracy (generally, it’s not going to be usable for most purposes if it’s less than about 97% accurate). Is the provided quality/level of accuracy good enough for your needs? Is it good enough to attach your name and Penn’s name to the final product?
Do you need just a transcript? Or timed captions?
Do you want an “interactive transcript,” like the ones some instructional-video sites provide?
Find out what output formats they provide. (Is it just straight text in a .docx file with periodic time codes inserted? Timed SRT captions? DFXP/TTML?) The degree of timing accuracy you need will partly determine which file format you need. Some formats are convertible to others.
Some services will transcribe a few videos for free first to see if you’re happy with the service.
How fast is the turnaround time they offer? (Generally you pay less for slower turnaround, but it can be useful to be able to pay extra when you need it the next day.) A service is going to provide much faster turnaround time than an individual can because they have many transcribers working for them.
Does your school have an existing relationship with a captioning service?
Do your captions need to be ADA compliant? (Both Penn State and Netflix have had lawsuits against them because of the lack of captioning. Check with your School/center/department to see if there’s a policy regarding captioning you’ll need to follow.)
Do you need a HIPAA compliant service or is the material otherwise sensitive or confidential?
Can you build the cost of transcribing into your research budget or grant proposal?
Do you need all of your raw footage transcribed (as you would if you were editing a documentary)? Or just the final edited version (as you would if you were simply trying to meet ADA requirements)?
Are they a Penn-approved vendor? Can you pay with a purchase order?
Do you need transcription in a language other than English? (English and Spanish are pretty easy to find, but there are services that offer transcription in many other languages as well, sometimes at a premium cost.)
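To make the output-format question above concrete, here’s what a timed caption file looks like in the common SRT format: numbered cues, each with a start and end timestamp and a line or two of text. (The text and timings below are invented for illustration.)

```
1
00:00:01,000 --> 00:00:04,200
Welcome to the Penn Libraries.

2
00:00:04,300 --> 00:00:07,500
Today we'll talk about captioning your videos.
```

A plain transcript discards the timestamps entirely, while formats like DFXP/TTML add styling and positioning information on top of the timing, which is why it’s worth knowing which deliverable you actually need before you order.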
As far as recommended services, I’m glad to recommend both AutomaticSync and 3Play, both of which we’ve used and both of which we’ve been very happy with.
Kaylin Raby is a junior studying Systems Engineering and is the president of Access Engineering at Penn. In this guest post, she describes what the club does and explains its mission.
Recently there has been a push to encourage science, technology, engineering and math (STEM) education in schools across the country. Science and math are standard elements of high school curricula everywhere, and kids are exposed to technology every day of their lives. However, kids often have much less experience with engineering and what it actually entails. Access Engineering seeks to change this by providing high school students with a realistic and approachable first-year undergraduate engineering curriculum.
Access Engineering’s mission is to demonstrate to Philadelphia high school students what engineering is all about: an analytical thought process and an option for a future career. We also hope to inspire and motivate students to seek out higher education in general. As Penn Engineering students, we are in a unique position to accomplish this mission. We can relate to the challenges they face as high school students. Many prospective students do not apply to engineering schools because they don’t know what an engineering curriculum covers or they have misconceptions about what it entails. We want to acquaint students with the various engineering majors and give them actionable advice about their potential college paths and engineering careers.
Access Engineering offers two weekly programs to high school students interested in learning more about engineering. We teach an introductory track, which gives students a broad introduction to many different engineering fields. This includes an introduction to the Java programming language, circuit design, robotics, an introduction to computer-aided design, app development, and prototyping parts on 3-D printers. The advanced section focuses specifically on the integration of circuitry and computer science with mechanical engineering, building upon material learned within our first-semester program.
Last semester, Access Engineering brought over seventy students to Penn every weekend to participate in our first and second semester programs. We recruit students from four main partner schools in and around Philadelphia, and we plan to expand the program to new schools each year.
We teach our lessons weekly on Saturday mornings from 10 AM – 1 PM. If you would like to know more about the club and what we do, we encourage you to visit our website. Recruitment for the fall semester begins in September; be sure to stop by the Activities Fair to speak with current volunteers about the Access Engineering experience!
Back in 2009, Google teased users with Gmail Autopilot, a service which would both read and respond to emails automatically. The service would get to know you by reading your emails and would respond in your personal communication style. Of course, people quickly recognized this as an infamous Google hoax.
Google’s “deep learning” technology, called Smart Reply, now allows Gmail to analyze the content of your email and suggest a few brief responses to it. In many cases, this new bit of Google’s artificial intelligence can stand in for composing a reply yourself.
According to Google product manager Alex Gawley, Smart Reply tailors both the tone and the content of its suggestions to the email and offers three responses. The user can still choose to use one of them as-is or modify it with their own words.
This particular feature of Gmail is the result of machine learning. Pieces of information from all over the world are constantly fed into a neural network (a network of computers intended to represent and perform the functions of neurons in the human brain) based on long short-term memory (LSTM). One half of this network analyzes the incoming information and ‘learns’ the underlying patterns in diverse sets of phrases in the language. The second half generates potential responses (typically 3-6 words long), one word at a time.
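The real system is a large LSTM trained on an enormous email corpus, but the “one word at a time” generation step can be illustrated with a much simpler stand-in: a toy bigram model that learns which words tend to follow which in a handful of example replies, then emits a reply word by word. Everything below (the training replies, the function names) is invented for illustration; it is not Google’s code or model.

```python
import random
from collections import defaultdict

# Toy stand-in for word-at-a-time reply generation. A real system uses
# an LSTM trained on huge corpora; this bigram model just picks each
# next word from patterns seen in a few hypothetical training replies.

training_replies = [
    "sounds good to me",
    "sounds good thanks",
    "see you then",
    "see you tomorrow",
]

# Learn which words follow which (the "patterns" the model picks up).
next_words = defaultdict(list)
for reply in training_replies:
    words = ["<start>"] + reply.split() + ["<end>"]
    for current, following in zip(words, words[1:]):
        next_words[current].append(following)

def generate_reply(rng=random):
    """Emit a short reply one word at a time until the end token appears."""
    word, reply = "<start>", []
    while True:
        word = rng.choice(next_words[word])
        if word == "<end>" or len(reply) >= 6:  # suggestions stay short
            return " ".join(reply)
        reply.append(word)

print(generate_reply())
```

Running this might print “sounds good thanks” or “see you then”: short, plausible replies assembled word by word, which is the basic idea Smart Reply scales up with a vastly more capable model.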
For example, fed enough pictures of humans, the machine eventually ‘learns’ how to identify a human. The approach itself is not new: it can be thought of as an extension of the suggested-searches feature in the Google Search engine, the auto-complete feature in our phones’ texting applications, or the personal assistants, Siri and Cortana, on our phones.
Naturally, the quality of this feature depends on the amount of data fed into the neural network. With only a limited amount of data, the machine’s responses can be rudimentary at best. Nevertheless, this technology is a leap toward the auto-replying inbox that Autopilot once joked about.
Rumors are that this machine can even process jokes and suggest appropriate responses to them!
Welcome to the future! Comment here and let us know what you think. Fascinating use of technology or unnecessary AI intervention?
You are invited to the Penn Libraries Research Teas, taking place Tuesdays at 4:00 p.m. in Meyerson beginning February 2nd. The Penn Libraries’ Research Teas are an opportunity to share and learn about ongoing research at Penn: the what, the why, the who, and the how. Relax, listen, ask questions, and share your own ideas, all while enjoying a cup of tea.
The February 2 Research Tea features CLIR Postdoctoral Fellow Laura Aydelotte: Special Collections Materials In Hand and Online
Laura will discuss the Provenance Online Project, which she directs here at the Libraries. This is a digital humanities project that crowdsources photographs of and information about ownership marks—bookplates, inscriptions, stamps, and more—all details that tell us whose hands these rare books have passed through over the centuries. She will talk about the way this project and other special collections and digital humanities related work can be used for research and teaching. There will also be an opportunity to see some of the fascinating rare books from the Kislak Center for Special Collections, Rare Books and Manuscripts.
February 16th: Professors Bethany Wiggin and Catriona MacLeod will talk about their publication in progress, Un/Translatables: New Maps for German Literature
March 1st: Digital Methods for Americanists with CLIR Postdocs, Elizabeth Rodriques (Temple) and Lindsay Van Tine (Penn)
April 5: Professor Emily Wilson will talk about her work to create a new verse translation of Homer’s Odyssey.
April 12: Working with Images, Creative Commons licenses, and Fair Use with Patty Guardiola, Assistant Head, Fisher Fine Arts Library
April 19: Professor Toni Bowers will talk about her work creating scholarly editions of Samuel Richardson and P.G. Wodehouse.