Gmail’s Smart Reply for you

Back in 2009, Google teased users with Gmail Autopilot, a service that would both read and respond to emails automatically. Autopilot would supposedly get to know you by reading your emails and responding in your personal communication style. Of course, people quickly recognized this as one of Google’s infamous April Fools’ hoaxes.

Example Autopilot Responses
©2011 Google

This may have seemed far-fetched six years ago, but personal, automated email responses are now becoming a reality thanks to Google’s new artificial intelligence technology:

“Google just unveiled technology that’s at least moving in that direction. Using what’s called “deep learning”—a form of artificial intelligence that’s rapidly reinventing a wide range of online services—the company is beefing up its Inbox by Gmail app so that it can analyze the contents of an email and then suggest a few (very brief) responses. The idea is that you can rapidly respond to someone while on the go—without having to manually tap a fresh message into your smartphone keyboard.”

Google’s “deep learning” technology, called ‘Smart Reply,’ now allows Gmail to analyze the content of your email and suggest a few brief responses to it. With it, composing a reply from scratch can often be skipped entirely.

According to Google product manager Alex Gawley, Smart Reply tailors both the tone and content of its suggestions to the email and offers three responses. The user can still choose to use one of them as is or modify it in their own words.


This particular feature of Gmail is the result of something called ‘machine learning.’ Vast quantities of email text are constantly fed into a neural network (a system of computations intended to mimic the function of neurons in the human brain) built on an architecture called long short-term memory (LSTM). One half of this network analyzes incoming messages and ‘learns’ the underlying patterns in diverse sets of phrases in the language. The second half generates potential responses (typically 3–6 words long), one word at a time.
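To make the two halves concrete, here is a toy sketch in Python. This is purely illustrative and nothing like Google’s production LSTM: an “encoder” reduces a message to a crude feature, and a “decoder” builds a short reply one word at a time from learned word-transition counts. The sample messages and replies are invented for the example.

```python
from collections import defaultdict

# Toy training data: (incoming message, short reply) pairs.
# A real Smart Reply model trains an LSTM on enormous email corpora.
pairs = [
    ("are you free for lunch tomorrow", "yes that works"),
    ("can we meet tomorrow", "yes that works"),
    ("thanks for the report", "glad to help"),
    ("thank you for your help", "glad to help"),
]

# "Encoder" half: collapse a message to a crude feature (its key word).
def encode(message):
    for key in ("tomorrow", "thank"):
        if key in message:
            return key
    return "other"

# "Decoder" half: count which reply word follows which, per feature.
transitions = defaultdict(lambda: defaultdict(int))
for message, reply in pairs:
    feature = encode(message)
    words = ["<s>"] + reply.split() + ["</s>"]
    for prev, nxt in zip(words, words[1:]):
        transitions[feature, prev][nxt] += 1

def suggest_reply(message, max_words=6):
    """Generate a short reply one word at a time, greedily."""
    feature, word, out = encode(message), "<s>", []
    while len(out) < max_words:
        choices = transitions[feature, word]
        if not choices:
            break
        word = max(choices, key=choices.get)  # most frequent next word
        if word == "</s>":
            break
        out.append(word)
    return " ".join(out)

print(suggest_reply("are you free tomorrow"))  # -> "yes that works"
print(suggest_reply("thanks for everything"))  # -> "glad to help"
```

The real system replaces both halves with recurrent neural networks, so it can generalize to phrasings it has never seen rather than matching keywords.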

For example, fed enough pictures of humans, a machine eventually ‘learns’ how to identify a human. The approach itself is not new: it can be thought of as an extension of the search suggestions in the Google Search engine, the auto-complete feature in our phones’ texting applications, or the personal assistants Siri and Cortana.

Naturally, this feature depends on the amount of data fed into the neural network. With only a finite amount of data, the machine’s responses can be rudimentary at best. Nevertheless, this technology is a leap toward truly automated, personal email.


Rumors are that this machine can even process jokes and suggest appropriate responses to them!

Welcome to the future! Comment here and let us know what you think. Fascinating use of technology or unnecessary AI intervention?


Research Teas @ Penn Libraries Begins February 2 at 4:00 p.m.

You are invited to the Penn Libraries Research Teas, taking place Tuesdays at 4:00 p.m. in Meyerson beginning February 2nd. The Penn Libraries’ Research Teas are an opportunity to share and learn about ongoing research at Penn: the what, the why, the who, and the how. Relax, listen, ask questions, and share your own ideas, all while enjoying a cup of tea.


The February 2 Research Tea features CLIR Postdoctoral Fellow Laura Aydelotte: Special Collections Materials In Hand and Online

Laura will discuss the Provenance Online Project, which she directs here at the Libraries. This digital humanities project crowdsources photographs of and information about ownership marks—bookplates, inscriptions, stamps, and more—details that tell us whose hands these rare books have passed through over the centuries. She will talk about how this project and other special collections and digital humanities work can be used for research and teaching. There will also be an opportunity to see some of the fascinating rare books from the Kislak Center for Special Collections, Rare Books and Manuscripts.

Following weeks:

February 16: Professors Bethany Wiggin and Catriona MacLeod will talk about their publication in progress, Un/Translatables: New Maps for German Literature.

March 1: Digital Methods for Americanists with CLIR Postdocs Elizabeth Rodriques (Temple) and Lindsay Van Tine (Penn).

April 5: Professor Emily Wilson will talk about her work to create a new verse translation of Homer’s Odyssey.

April 12: Working with Images, Creative Commons Licenses, and Fair Use with Patty Guardiola, Assistant Head, Fisher Fine Arts Library.

April 19: Professor Toni Bowers will talk about her work creating scholarly editions of Samuel Richardson and P.G. Wodehouse.

Links for registering or sharing with others:

Penn Libraries Facebook event:

Wicshops: (look for Research Teas, and more information to come on Diversi-Teas).

Learn to Transcribe and Encode Early English Books!

Title page of The Booke of Pretty Conceites.

Are you interested in early modern texts and learning more about the digital humanities? The Early Books Collective is once again looking for undergraduate students and all interested parties to collaborate with us in transcribing the 17th-century text The Booke of Pretty Conceites–very merry, and very pleasant, and good, to be read of all such as doe delight in new and merry conceites.

Join us every Wednesday from 4 to 5 p.m. in the Vitale II Media Lab, Kislak Center, 6th Floor, Van Pelt Library. No registration necessary.

Working with the Early English Books Online (EEBO) Text Creation Partnership (TCP), you’ll decipher and transcribe this text using the TEI encoding language, cultivating a valuable skill for work in the digital humanities.
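To give a flavor of what TEI markup looks like, here is a minimal, hypothetical fragment built with Python’s standard library. The element names (`text`, `body`, `div`, `head`, `p`, `hi`) are genuine TEI elements, but the structure shown is a simplified sketch, not the EEBO-TCP project’s actual encoding guidelines.

```python
import xml.etree.ElementTree as ET

# Build a minimal, hypothetical TEI-style fragment for one transcribed page.
text = ET.Element("text")
body = ET.SubElement(text, "body")
div = ET.SubElement(body, "div", type="chapter")

# <head> holds a heading; <p> holds a paragraph of transcribed prose.
head = ET.SubElement(div, "head")
head.text = "The Booke of Pretty Conceites"
p = ET.SubElement(div, "p")
p.text = "Very merry, and very pleasant, and good, to be read of all such "

# <hi> marks typographically distinct text, e.g. italics in the original.
hi = ET.SubElement(p, "hi", rend="italic")
hi.text = "as doe delight in new and merry conceites."

print(ET.tostring(text, encoding="unicode"))
```

In practice transcribers write this markup by hand in an XML editor while reading page images of the original book, so the printed layout and the encoded structure can be compared side by side.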

Upon completion, we will contribute our transcribed text back to the EEBO database and the Text Creation Partnership, making it fully and freely available for anyone to use.

Side-by-side images of the original letter-to-the-reader page and the transcribed TEI code.
Experience firsthand how early books are digitized!

Join us in contributing to this important project of creating an invaluable scholarly tool!

Life with Technology at Penn: Student Research Exhibit

Dr. Rosemary Frasso, Allison Golinkoff (TA), and the graduate student research team of Qualitative Research Methods for Social Work and Public Health Professionals (SW 781), Fall 2015

As everyone trickles back into the library this semester, take some time to walk toward the Van Pelt Collaborative Classroom (right before the WIC entrance, to the right) to see Dr. Rosemary Frasso’s graduate students’ research exhibit Life with Technology Among University of Pennsylvania Students. Dr. Frasso’s previous research exhibits include Pressure Release and Fear and Safety at Penn. I took some time this week to make my way through the exhibit and found it interesting to see how Penn students understand technology’s role in their lives. Here at WIC we post about tech frequently, and this past year alone we’ve discussed new ways of using social media tools, using apps for productivity and travel, and our experiences with 3D printing. Life with Technology takes a more in-depth look into the complicated ways students’ lives intersect with technology that can be both useful and intrusive. The exhibit is organized into thematic categories: Changing Times, Dependence, Disconnected, Efficiency, Health, Multitasking, Privacy, Social Connections, Ubiquitous, Unplugged, and Work and Education.

To decide on a topic, students used the Nominal Group Technique (NGT) to reach a consensus representative of the group’s preferences. Interviews were then conducted using photo elicitation (first named by photographer and researcher John Collier in 1957), in which a qualitative interview is guided by photographs taken by study participants. Each student recruited one participant, an undergraduate or graduate student from Penn, explained the study, and asked the participant to “define and explore the meaning of ‘life with technology’ over the course of one week using their phones to document their exploration.” Ultimately, the research team decided together which images and quotes to use in the exhibit and how these pieces fit into categories. Some memorable images include dried cranberries, Penn classrooms, a kitchen stove, and selfies.

From here, students will use NVivo 10 software for thematic analysis; members of the research team will then identify salient themes, summarize findings, and prepare an abstract for presentation and a manuscript for publication. The exhibit is beautiful and engaging, so please come by and check it out at the Van Pelt Collaborative Classroom.

If you are interested in using NVivo software, consider joining our NVivo User Group which meets monthly with a guest presenter for each session.

How do you collaborate? Let us know!

Penn Information Systems & Computing is looking for details on the collaboration tools you use on campus and what features make them effective. Take the Poll

What makes these tools so beneficial? What effect do they have on communication?

Please take five minutes to share your experiences with file sharing, audio and video conferencing, instant messaging, and group chat.

Your input will help improve collaborative processes at Penn.

Happy NVivo Year!

Lots of NVivo news to celebrate as we enter 2016!

Thanks to our awesome public computing support department, all the computers in Weigle and the Goldstein Electronic Classroom can once again run NVivo beautifully! Software glitches are fixed, our machines have solid-state drives that boot up faster, and our network now runs on gigabit Ethernet. So come on back, and bring your friends with you!

Our NVivo User Group is off to a great start with more than 60 people on our listserv and a Canvas course for sharing databases and questions. All four sessions to date had strong attendance and handouts are posted online.

Our next NVivo Basics class will be on January 27, and our next NVivo User Group meeting on February 1 will focus on query design, facilitated by Ebony Easley. We plan time for “ask an expert” consultations, so bring your team and your NVivo files along with you. On your way in, you can admire the latest student work exhibit by Rosie Frasso’s class on how technology is changing our lives; the students used NVivo to analyze their interviews.

Symposium videos reveal our robust campus community.

Thanks to our dedicated team of sponsors and organizers and our powerful presenters, we have received wonderful feedback from the 2015 Engaging Students Through Technology Symposium. One of my favorite comments inspired the title for this blog post:

“The community of people doing innovative things with tech at Penn is actually quite robust, and I felt like I was reintroduced to it in a really delightful way. We could probably do a better job maintaining that community all year long.”

We brought together over 150 faculty, staff and graduate students from all twelve Penn schools for an intense program of speakers and discussions. Our student survey responses and student and faculty panel presenters inspired conversation.

As our photo album shows, our audience stayed active and engaged throughout a very long Friday. I welcome you to browse the recently posted presenter slides and videos and share them with colleagues. Highlighted tools include Twitter (with Alain Plante and Emily Steiner), the LightBoard (with Phil Gressman), wikis (with Joe Farrell), Panopto lecture recording (with Peter Fader), and a variety of apps and web resources. The playlist below includes 22 videos!

Save the date: The 2016 Symposium will take place on Friday, October 28!