This past week Anu Vedantham and I had the opportunity to share thoughts on library service with a broad collection of higher education professionals – from librarians and architects to administrators and consultants – at an Academic Impressions conference here in Philadelphia. With guidance from Patrick Cain, our conference director, we brought the group to tour spaces at Penn Libraries: the Education Commons, the Weigle Information Commons, the Collaborative Classroom and the Kislak Center.
The Education Commons has just acquired a few 3D printers! We have one fifth-generation MakerBot and two MakerBot Minis. We’ve been testing the printers and our procedures over the past week, and the printers will open for campus use on Monday, February 16. We’re excited to offer the printers to all Penn students, faculty and staff. Penn’s campus already has a number of 3D printers, including the School of Engineering’s AddLab. The printers at the Biomedical Library and here at the EC are open for any use you might be interested in.
This spring, BE 310, Bioengineering Lab II, took on a project to combine robotics and biology, using WIC’s iPads and cockroaches. Using a hardware set and an app from Backyard Brains, students in the class taught by David Meaney and Susan Margulies used their own muscles and brains to drive the muscles in cockroach legs.
When Sevile Mannickarottu first approached us with his idea, we were surprised to learn about this use for iPads. Fortunately, it was easy to install the free app on 12 of our iPads.
The Education Commons has been open nearly two years now, and though it’s no longer the newest library space on campus (I doubt I need to tell anyone about the new 6th floor, but check it out), we have added upgrades in recent months. Our staff can help you with all of them – me (Eric Janec, the EC Librarian), as well as our four interns: Dave Klimowicz, Nick Giovannangelo, Jen Hunter and Alexis Morris. (Wondering how to find us? We have a new video showing how to walk here from Van Pelt.)
Cloud computing has been an increasingly hot topic over the past few months. Services like Dropbox, Amazon Cloud Drive and Google Docs are seeing more and more use, and Apple is getting in on it too, announcing iCloud as one of the signature features of the iPhone 4S. Despite all of the up-and-coming cloud storage services and applications, plenty of people are still confused about what cloud computing is. And that’s really not much of a surprise, because cloud computing isn’t something new; it’s the maturation of something old that we’re all completely familiar with. The confusion comes from trying to wrap our heads around an incredible new leap in technology when we’ve actually been using it for years.
So, what exactly is cloud computing? Well, at its simplest, it’s the use of a remote computer for applications or storage, rather than a local one. Instead of having the laptop in front of you run an application like Microsoft Word, you have Google’s servers run Google Docs. Then you save your documents to Google’s servers instead of your own hard drive, and pow, all of a sudden you’re in the cloud! It sounds good, but what exactly have you just done? You’ve launched a web page, used it, and closed it. That isn’t revolutionary; that’s just the internet (which was revolutionary). The internet has always worked this way: you connect to a remote server, ask it to do something, and see the results on your screen.
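If you like seeing the idea in code, here’s a minimal sketch of “the cloud is just a remote computer”: a tiny toy HTTP server stands in for Google’s servers, and the client saves and retrieves a document there instead of on the local disk. Everything here (the server, the file name) is made up for illustration – it’s not any real cloud API.

```python
import threading
import http.client
from http.server import BaseHTTPRequestHandler, HTTPServer

documents = {}  # the "cloud": storage that lives on the server, not your laptop


class CloudHandler(BaseHTTPRequestHandler):
    def do_PUT(self):
        # Store whatever the client uploads, keyed by the request path.
        length = int(self.headers["Content-Length"])
        documents[self.path] = self.rfile.read(length)
        self.send_response(200)
        self.end_headers()

    def do_GET(self):
        # Hand the stored document back to the client.
        body = documents.get(self.path, b"")
        self.send_response(200)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):  # keep the demo quiet
        pass


# The "remote server", running in the background.
server = HTTPServer(("127.0.0.1", 0), CloudHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()

# The "client": your laptop asking the remote server to hold the file.
conn = http.client.HTTPConnection("127.0.0.1", server.server_port)
conn.request("PUT", "/essay.txt", body=b"my draft")
conn.getresponse().read()

# Later, from anywhere with a connection, ask for it back.
conn.request("GET", "/essay.txt")
fetched = conn.getresponse().read()
print(fetched.decode())  # the document came back from "the cloud"
server.shutdown()
```

The whole trick is in those last few lines: the document never touched your own disk, and every “cloud” service is some much fancier version of that exchange.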
Now, while cloud computing isn’t new, it’s certainly more powerful than previous types of remote computing. Cloud computing is the result of increases in connection speed, server power, and storage capacity. Taken together, these advances let you do more tasks and store more things remotely than was previously possible. When you had to connect over a 56k modem, you couldn’t send and receive enough data to work in an application through your browser at the same speed as a local program, so you did it on your own computer. Now, with broadband and wireless (not to mention mobile computing) expanding to more areas and constantly speeding up (mmm, 4G), it’s suddenly possible to get a lot more done without downloading and installing a program on your home computer, and that’s what has led to the explosion of cloud services.
I may have come off a little negative in this post, so I want to make it clear that I love the cloud. I just think the hype outstrips the reality. I do a ton of work in Google Docs, I store things remotely, and I use Facebook (also a cloud service! All your pictures and data are stored on their servers, and they’re running the application) and Dropbox. I still maintain local copies of my more important work, because while cloud storage does add redundancy, it also creates new points of failure: your internet connection and the company running the service. You don’t want all your documents to be in the cloud when your internet goes down. There have also been a few high-profile outages for server-based programs recently: BlackBerry Messenger went down for days, and Apple’s new Siri service had several hours of outages over the past weeks. While not catastrophic, these types of things make me want to keep some of my most important work locally, where at least if my hard drive fails I can glare directly at it and blame myself for not backing up, rather than diffusing my anger into the cloud.
If you’ve been around WIC, you’ve probably noticed that we use QR codes on a lot of our flyers and handouts. QR codes have been around for a while but haven’t gained a huge amount of traction. They’ve recently started to grow in popularity, both in libraries and elsewhere, but they haven’t reached the ubiquity necessary for people to start expecting them. Part of that is likely because, while the smartphone market is growing quickly, it’s not universal yet. Not every product or service has users who carry smartphones, and so those companies don’t need to create QR codes.
I like QR codes because they let you jump right into whatever you’re interested in. That’s a huge improvement over the old method of writing down or photographing an address and visiting it later, once you’re at a computer. That itself was an improvement over the positively archaic practice of calling a phone number or even physically going somewhere to find out more about something you’re curious about. But even though QR codes are much more convenient, they’re still not quite convenient enough. Here, I think the problems revolve more around the readers than the codes themselves. While you only need a smartphone to read a code, you also need to pull that phone out, unlock it, open a QR reader app, hold your camera over a specific spot, wait for your network to load the page, and hope that the page is a mobile page (ours is!), or at least has a mobile theme.
Maybe the modern world has made me too demanding, but that process annoys me, even though I do it fairly often. Luckily for my lack of patience, it seems there’s a new system on the horizon: NFC, or Near Field Communication. That Wikipedia page will give you an idea of some of NFC’s potential. Most of the work on NFC right now is focused on payments, but I think that once it spreads, it will replace QR codes as well. NFC eliminates all the annoyance of QR codes and even improves on what they deliver. You don’t have to unlock or open your phone, much less start an app and aim your camera at a specific spot. You just wave your phone near a tag, and the tag passes instructions to your phone wirelessly and passively. It can instruct your phone to open a website, or save a note that you can read later, whenever you’re ready.
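For the curious, those “instructions” have a standard shape. A tag that opens a website carries an NDEF URI record, a few bytes of header plus the address. Here’s a sketch that builds one by hand; the byte layout follows the NFC Forum’s URI record definition, and the URL is just an example, not a real WIC tag.

```python
def ndef_uri_record(url_without_prefix, prefix_code=0x01):
    """Build a short NDEF URI record. prefix_code 0x01 stands for 'http://www.',
    so the tag doesn't waste bytes spelling out the common part of the URL."""
    payload = bytes([prefix_code]) + url_without_prefix.encode("ascii")
    return bytes([
        0xD1,          # header flags: single short record, well-known type
        0x01,          # type length: 1 byte
        len(payload),  # payload length (short-record form, under 256 bytes)
    ]) + b"U" + payload  # type 'U' means URI, then the payload itself

record = ndef_uri_record("example.com")
print(record.hex())  # 16 bytes total for 'http://www.example.com'
```

Sixteen bytes is the entire “program” – your phone reads them passively the moment it comes near the tag, which is exactly why the experience beats aiming a camera at a printed square.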
Of course, it will take some time before this finally happens. QR codes leveraged a universal technology in phone cameras, while NFC requires a specific chip to be built into a phone when it’s manufactured. Manufacturers are starting to build NFC-capable phones, and different organizations are starting to support NFC, but it’s still a young technology. People will need to replace their phones to use it, and companies will need to adopt the technology to provide points of service, but the convenience of the process makes it attractive. I think enough people will want the ease of tapping a phone to get things done to ensure that NFC grows very quickly. The technology already has Google behind it, and that’s a powerful supporter. While their main focus is currently Google Wallet, if they can push NFC into enough places, we’ll all be able to benefit from that infrastructure as different companies use it for different reasons.
As you may have heard, George Lucas is in the process of giving the original Star Wars trilogy another facelift, as he did back in 1997, this time for the Blu-ray release of the entire set. Now, you may not all be fans of Star Wars (though I can’t imagine why not), but the news being released about this latest re-imagining of the series has a number of fans a bit upset. Most controversially, Darth Vader now screams “Noooooo!” as he throws the Emperor down the shaft at the end of the final movie.
While changes to these movies are hardly new, the series is now nearly 35 years old, and the various versions of the films have blended together with no clear delineation between the original works and the edited versions. What does this have to do with WIC? Well, thinking about this phenomenon made me realize that Star Wars has essentially become a mashup of itself. New scenes are introduced, old ones deleted; dialogue is shuffled, removed and inserted; new effects are added and old ones replaced. Lots of mashups blend completely disparate films and media, but the principle stays the same: editing and splicing media together to create a new effect from pieces of the old.
What is the difference between editing and re-creating? Where does a product stop being a re-release and start being a new movie? Is it changing the format to HD or Blu-ray? Remastering the special effects? Replacing puppets with CGI? Introducing new footage, or adding layers of audio? Clearly there’s a sliding scale, and exactly where a movie changes from a new edition or a director’s cut into something new enough to be a new product is up for debate. I say that at this point the newest Star Wars movies are different enough in form and content to be considered a mashup of themselves, drawing on content produced over more than 30 years.
So what’s the difference between George Lucas’s Star Wars and your mashup? The big things that come to mind are his access to millions of dollars’ worth of equipment and his ownership of the original footage. Those are no small things, but I’m happy to think that the man who made Star Wars is just another guy who likes to tinker with film and see what comes out, even if what comes out often enrages fans of the original work. And that right there is what I see as the definitive difference: a mashup creates a new work out of pieces of the old, while Lucas is redefining the original work itself with his mashup. I’d say it’s just as odd when he does it as it would be if a Penn student replaced their home movies with a mashup they’d made. He’s got the right, but why?