5 takeaways from Google I/O 2014

Swav Kulinski
By Swav Kulinski under Insights, Engineering 17 July 2014

On 25-26th June this year, Google I/O 2014 took over San Francisco and we decided to get together here at TAB HQ to share in the action. Thanks to our iOS friends, who lent us a good projector, we were able to stream the coverage onto a spare wall in our ground floor space, drawing together TAB’s Droid team to watch the keynotes and swap predictions for what would – and wouldn’t – be covered. Being on a busy street in Soho, London, meant we weren’t alone in staring at the big screen…

Google I/O 2014

This year, it’s fair to say Google made it big. A few announcements were expected – Android Wear, in particular. What we didn’t expect was that Google would make a move in all directions at once: from Android Wear and Android Auto, to Android TV and the release of Google Glass.

On top of this, we were introduced to Material Design, the new Android UI language; Polymer, a web components framework that implements Material Design; and the newest Android L SDK.

1. Android L (Preview)

It’s no secret that Android’s virtual machine is not the best in the world. Google decided to change that by dumping Dalvik entirely in favour of a new runtime, ART. After just-in-time (JIT) compilation was introduced in 2.2, this is the next big step: ART brings improved garbage collection and uses ahead-of-time (AOT) compilation, which improves performance in two respects – speed and battery consumption. On the downside, ART still drags along the same 64k dex method-reference limit. A lot of projects hit this limit quickly, especially after the release of the new Play Services 5.0, which is sadly a big monolithic lib (and already takes up a third of the limit).
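If you want to see how close your build is to that ceiling: the dex file header stores the number of method references as a little-endian uint32 at byte offset 0x58 (per the dex format specification). A plain-Java sketch – not an official tool, and the class name here is made up:

```java
import java.io.IOException;
import java.io.RandomAccessFile;

// Reads method_ids_size from a dex file header. The field lives at byte
// offset 0x58 and is stored as a little-endian unsigned 32-bit integer.
public class DexMethodCount {
    public static long methodRefs(String dexPath) throws IOException {
        try (RandomAccessFile f = new RandomAccessFile(dexPath, "r")) {
            f.seek(0x58);
            int b0 = f.read(), b1 = f.read(), b2 = f.read(), b3 = f.read();
            return (b0 & 0xFFL) | ((b1 & 0xFFL) << 8)
                 | ((b2 & 0xFFL) << 16) | ((b3 & 0xFFL) << 24);
        }
    }
}
```

Point it at the classes.dex inside an APK; anything approaching 65,536 means trouble.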

There is a massive number of commits concerning ART itself. On top of these, about 700 commits relate to the 64-bit architectures of ARM and Intel processors. On the topic of 64-bit support, I must say I like Google’s lean approach to the problem: it has done this at the last possible moment (saving us the hassle), just to keep the ARMv8 bargaining card out of Apple’s hand.

2. Inside Android L SDK

It turns out that old Grandpa ListView is going to be retired. The new documentation provided with the Android L preview SDK shows the RecyclerView class. Its API lets us implement a custom LayoutManager to place child views on the screen, while smarter adapters take on the job of creating views and providing ViewHolders. In short, we get a smart ViewGroup that recycles its content as we scroll, and we’re no longer limited by the class in how we arrange that content on the screen.
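As a sketch of how the new pieces fit together (based on the L preview support classes; the item layout, view ids and data here are all hypothetical):

```java
import java.util.List;

import android.support.v7.widget.RecyclerView;
import android.view.LayoutInflater;
import android.view.View;
import android.view.ViewGroup;
import android.widget.TextView;

// A minimal RecyclerView adapter: it creates ViewHolders on demand and
// binds recycled ones to new data as the list scrolls.
public class ArticleAdapter extends RecyclerView.Adapter<ArticleAdapter.ViewHolder> {

    public static class ViewHolder extends RecyclerView.ViewHolder {
        final TextView title;
        ViewHolder(View itemView) {
            super(itemView);
            title = (TextView) itemView.findViewById(R.id.title);
        }
    }

    private final List<String> articles;

    public ArticleAdapter(List<String> articles) {
        this.articles = articles;
    }

    @Override
    public ViewHolder onCreateViewHolder(ViewGroup parent, int viewType) {
        // Inflate a fresh row only when the RecyclerView has none to recycle.
        View v = LayoutInflater.from(parent.getContext())
                .inflate(R.layout.item_article, parent, false);
        return new ViewHolder(v);
    }

    @Override
    public void onBindViewHolder(ViewHolder holder, int position) {
        holder.title.setText(articles.get(position));
    }

    @Override
    public int getItemCount() {
        return articles.size();
    }
}
```

Wiring it up is then a two-liner – `setLayoutManager(...)` plus `setAdapter(...)` – and swapping in a custom LayoutManager changes the on-screen arrangement without touching the adapter at all.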

Another addition to the family is CardView, which finally replaces all the dodgy third-party libraries we used before.

Designers will be interested in the animations between Activities. In Android L, we can specify a view that will be animated during the transition between two Activities. Let’s say we want Ron’s picture, shown in a CardView, to move and scale while we open the article screen: all we have to do is specify which view is the lucky one to be animated.
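With the L preview APIs this boils down to a few lines. A sketch – `ArticleActivity`, `R.id.photo` and the `"ron_photo"` transition name are hypothetical, and the same `android:transitionName` must be set on the matching view in both layouts:

```java
// Launch the article screen, animating the photo between the two Activities.
ImageView photo = (ImageView) cardView.findViewById(R.id.photo);
Intent intent = new Intent(this, ArticleActivity.class);
ActivityOptions options = ActivityOptions
        .makeSceneTransitionAnimation(this, photo, "ron_photo");
startActivity(intent, options.toBundle());
```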

3. Material Design (Designers/Developers)

It was quite an interesting session that explained why giving views depth makes things easier to read, with the fundamental idea being that our perception already works this way. Ultimately, we just want to leverage something which has been in our brain’s default library since version 0.1.

The new Android look introduces depth (Z) along with shadows. This way, we can read the UI faster and more easily understand what is happening, with ripple and animated shadow effects providing feedback to the user on their successful interactions. However, we need to understand that this is not a 3D UI. A new native UX is reflected in Polymer: a web component library that allows us to take a fresh look into the possibility of creating hybrid apps for Android and iOS.
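In Android L, depth is simply another view property. A minimal sketch (the view id is hypothetical, and the dp-to-pixel conversion is omitted for brevity):

```java
// Elevation sets the view's Z position; the framework draws a real-time
// shadow from it, so "higher" views visibly float above their neighbours.
View card = findViewById(R.id.card);
card.setElevation(8f);

// Touch feedback (the ripple) can be declared in the layout, e.g.
// android:background="?android:attr/selectableItemBackground"
```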

Designers might also be interested in the Palette interface, which lets us extract the dominant colours from a specified bitmap. We can now create a UI that adapts its colouring to the images shown on screen. By default, the API calculates 16 colours for us to choose from.
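Usage looks roughly like this (a sketch against the support-library Palette class; method names may differ between the preview and later releases, and `toolbar` is a hypothetical view):

```java
// Generate the palette off the UI thread, then colour the UI from the result.
Palette.generateAsync(bitmap, new Palette.PaletteAsyncListener() {
    @Override
    public void onGenerated(Palette palette) {
        Palette.Swatch vibrant = palette.getVibrantSwatch();
        if (vibrant != null) {
            toolbar.setBackgroundColor(vibrant.getRgb());
        }
    }
});
```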

4. Android Wear

Android Wear works like an extra touch screen for your device. By default, it receives your device’s notifications (only important ones), provides some sensor information (e.g. it can unlock your phone if your wearable is nearby), and it can measure your walking distance. It can even switch its display on when it detects you’re looking at it. Most of the commands can be given via voice recognition, which works amazingly well.
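Notifications are bridged to the watch automatically, and the NotificationCompat APIs announced alongside Wear let us attach wearable-only extras. A sketch – the icons, text and pending intent here are hypothetical:

```java
// A phone notification extended with an action that appears on the watch.
Notification notification = new NotificationCompat.Builder(context)
        .setSmallIcon(R.drawable.ic_message)
        .setContentTitle("New article")
        .setContentText("Tap to read on your phone")
        .extend(new NotificationCompat.WearableExtender()
                .addAction(new NotificationCompat.Action(
                        R.drawable.ic_read, "Mark as read", readPendingIntent)))
        .build();
NotificationManagerCompat.from(context).notify(1, notification);
```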

5. Polymer

And finally, for me personally, Polymer is the most important revelation from Google I/O this year. Most web frameworks are end-to-end sandboxes, largely incompatible with each other. Polymer brings a foundation on which other frameworks, e.g. AngularJS, can be built. The library has been out there for almost two years, but now, with Material Design support and paper components (including WebView with animating components, shadows, ripples etc.), it’s really alive and kicking. I’ve taken the liberty of trying it already, and I strongly recommend everyone take a quick look.

I’ll finish by urging you to head to the Polymer Designer page and build something – just drag and drop, folks!