Posted by Laurence Moroney, Developer Advocate

If you’ve worked with Web or cloud tech over the last 18 months, you’ll have heard about Containers and about how they let you spend more time building software instead of managing infrastructure. In this episode of Coffee with a Googler, we chat with Brian Dorsey about the benefits of using Containers on Google Cloud Platform to simplify infrastructure management.

Important discussion topics covered in this episode include:

  • Containers improve the developer experience. Regardless of how large the final deployment is, they are there to make it easier for you to succeed.
  • Kubernetes -- an open source project that lets you manage containers and fleets of containers.

Brian shares an example from Julia Ferraioli, who used Containers (with Docker) to configure a Minecraft server with many plugins, and Kubernetes to manage it.

You can learn more about Google Cloud Platform, including Docker and Kubernetes, at the Google Cloud Platform site.

Originally posted on the Angular blog.

Posted by Misko Hevery, Software Engineer, Angular

Do you have an existing Angular 1 application and are you wondering about upgrading to Angular 2? Well, read on and learn about our plans to support incremental upgrades.

Summary

Good news!

    • We're enabling mixing of Angular 1 and Angular 2 in the same application.
    • You can mix Angular 1 and Angular 2 components in the same view.
    • Angular 1 and Angular 2 can inject services across frameworks.
    • Data binding works across frameworks.

Why Upgrade?

Angular 2 provides many benefits over Angular 1, including dramatically better performance, more powerful templating, lazy loading, simpler APIs, easier debugging, improved testability, and much more. Here are a few of the highlights:

Better performance

We've focused on performance across many scenarios to make your apps snappy: 3x to 5x faster in initial render and re-render scenarios.

    • Faster change detection through monomorphic JS calls
    • Template precompilation and reuse
    • View caching
    • Lower memory usage / VM pressure
    • Linear (blindingly fast) scalability with observable or immutable data structures
    • Dependency injection supports incremental loading

More powerful templating

    • Removes need for many directives
    • Statically analyzable - future tools and IDEs can discover errors at development time instead of run time
    • Allows template writers to determine binding usage rather than hard-wiring in the directive definition

Future possibilities

We've decoupled Angular 2's rendering from the DOM. We are actively working on supporting the following other capabilities that this decoupling enables:

    • Server-side rendering. Enables blindingly fast initial render and web-crawler support.
    • Web Workers. Move your app and most of Angular to a Web Worker thread to keep the UI smooth and responsive at all times.
    • Native mobile UI. We're enthusiastic about supporting the Web Platform in mobile apps. At the same time, some teams want to deliver fully native UIs on their iOS and Android mobile apps.
    • Compile as build step. Angular apps parse and compile their HTML templates. We're working to speed up initial rendering by moving the compile step into your build process.

Angular 1 and 2 running together

Angular 2 offers dramatic advantages over Angular 1 in performance, simplicity, and flexibility. We're making it easy for you to take advantage of these benefits in your existing Angular 1 applications by letting you seamlessly mix in components and services from Angular 2 into a single app. By doing so, you'll be able to upgrade an application one service or component at a time over many small commits.

For example, you may have an app with a left nav and a main content area. To get your feet wet with Angular 2, you decide to upgrade the left nav to an Angular 2 component. Once you're more confident, you decide to take advantage of Angular 2's rendering speed for the scrolling area in your main content area.

For this to work, four things need to interoperate between Angular 1 and Angular 2:

    • Dependency injection
    • Component nesting
    • Transclusion
    • Change detection

To make all this possible, we're building a library named ng-upgrade. You'll include ng-upgrade and Angular 2 in your existing Angular 1 app, and you'll be able to mix and match at will.

You can find full details and pseudocode in the original upgrade design doc or read on for an overview of the details on how this works. In future posts, we'll walk through specific examples of upgrading Angular 1 code to Angular 2.

Dependency Injection

First, we need to solve for communication between parts of your application. In Angular, the most common pattern for calling another class or function is through dependency injection. Angular 1 has a single root injector, while Angular 2 has a hierarchical injector. Upgrading services one at a time implies that the two injectors need to be able to provide instances from each other.

The ng-upgrade library will automatically make all of the Angular 1 injectables available in Angular 2. This means that your Angular 1 application services can now be injected anywhere in Angular 2 components or services.

Exposing an Angular 2 service to an Angular 1 injector will also be supported, but will require you to provide a simple mapping configuration.

The result is that services can easily be moved one at a time from Angular 1 to Angular 2 over independent commits and communicate in a mixed environment.
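
As a rough sketch of what this could look like, based on the pseudocode in the upgrade design doc (UpgradeAdapter, ticketService, and SeatMap are illustrative names, not a final API):


// Illustrative sketch only -- based on the upgrade design doc; the
// shipped ng-upgrade API may differ.
var adapter = new UpgradeAdapter();

// Angular 1 injectables become available in Angular 2 automatically;
// an explicit upgrade call for a hypothetical service might look like:
adapter.upgradeNg1Provider('ticketService');

// Exposing a hypothetical Angular 2 service to the Angular 1 injector
// requires a simple mapping:
angular.module('app')
  .factory('seatMap', adapter.downgradeNg2Provider(SeatMap));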

Component Nesting and Transclusion

In both versions of Angular, we define a component as a directive which has its own template. For incremental migration, you'll need to be able to migrate these components one at a time. This means that ng-upgrade needs to enable components from each framework to nest within each other.

To solve this, ng-upgrade will allow you to wrap Angular 1 components in a facade so that they can be used in an Angular 2 component. Conversely, you can wrap Angular 2 components to be used in Angular 1. This will fully work with transclusion in Angular 1 and its analog of content-projection in Angular 2.

In this nested-component world, each template is fully owned by either Angular 1 or Angular 2 and will fully follow its syntax and semantics. This is not an emulation mode that merely approximates the other framework, but actual execution in each framework, depending on the type of component. This means that components which are upgraded to Angular 2 get all of the benefits of Angular 2, and not just better syntax.

This also means that an Angular 1 component will always use Angular 1 Dependency Injection, even when used in an Angular 2 template, and an Angular 2 component will always use Angular 2 Dependency Injection, even when used in an Angular 1 template.
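
A sketch of wrapping in each direction, again with illustrative names rather than a final API:


// Use a hypothetical Angular 2 component from an Angular 1 template
// (adapter as in the dependency injection sketch above):
angular.module('app')
  .directive('leftNav', adapter.downgradeNg2Component(LeftNav));

// Use a hypothetical Angular 1 component directive from an Angular 2
// template:
var MainContent = adapter.upgradeNg1Component('mainContent');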

Change Detection

Mixing Angular 1 and Angular 2 components implies that Angular 1 scopes and Angular 2 components are interleaved. For this reason, ng-upgrade will make sure that the change detection mechanisms (scope digest in Angular 1 and change detectors in Angular 2) are interleaved in the same way to maintain a predictable evaluation order of expressions.

ng-upgrade takes this into account and bridges the scope digest from Angular 1 and change detection in Angular 2 in a way that creates a single cohesive digest cycle spanning both frameworks.

Typical application upgrade process

Here is an example of what an Angular 1 project upgrade to Angular 2 may look like.

  1. Include the Angular 2 and ng-upgrade libraries with your existing application
  2. Pick a component which you would like to migrate
    1. Edit an Angular 1 directive's template to conform to Angular 2 syntax
    2. Convert the directive's controller/linking function into Angular 2 syntax/semantics
    3. Use ng-upgrade to export the directive (now a Component) as an Angular 1 component (this is needed if you wish to call the new Angular 2 component from an Angular 1 template)
  3. Pick a service which you would like to migrate
    1. Most services should require minimal or no change.
    2. Configure the service in Angular 2
    3. (optionally) re-export the service into Angular 1 using ng-upgrade if it's still used by other parts of your Angular 1 code.
  4. Repeat steps #2 and #3 in whatever order is convenient for your application
  5. Once no more services/components need to be converted, drop the top-level Angular 1 bootstrap and replace it with the Angular 2 bootstrap.

Note that each individual change can be checked in separately and the application continues working, letting you release updates as you wish.

We are not planning to add support for allowing non-component directives to be usable on both sides. We think most non-component directives are not needed in Angular 2, as they are supported directly by the new template syntax (e.g. ng-click vs. (click)).

Q&A

I heard Angular 2 doesn't support 2-way bindings. How will I replace them?

Actually, Angular 2 supports two-way data binding and ng-model, though with slightly different syntax.

When we set out to build Angular 2, we wanted to fix issues with the Angular digest cycle. To solve this, we chose to create a unidirectional data flow for change detection. At first it was not clear to us how the two-way forms data binding of ng-model in Angular 1 would fit in, but we always knew that we had to make forms in Angular 2 as simple as forms in Angular 1.

After a few iterations we managed to fix what was broken with multiple digests and still retain the power and simplicity of ng-model in Angular 1.

Two-way data binding has a new syntax: [(property-name)]="expression", which makes it explicit that the expression is bound in both directions. Because this is just a small syntactic change for most scenarios, we expect migration to be easy.

As an example, if in Angular 1 you have: <input type="text" ng-model="model.name" />

You would convert to this in Angular 2: <input type="text" [(ng-model)]="model.name" />

What languages can I use with Angular 2?

Angular 2 APIs fully support coding in today's JavaScript (ES5), the next version of JavaScript (ES6 or ES2015), TypeScript, and Dart.

While it's a perfectly fine choice to continue with today's JavaScript, we'd like to recommend that you explore ES6 and TypeScript (which is a superset of ES6), as they provide dramatic improvements to your productivity.

ES6 provides much-improved syntax and interoperable standards for common libraries like promises and modules. TypeScript gives you dramatically better code navigation, automated refactoring in IDEs, documentation, error detection, and more.

Both ES6 and TypeScript are easy to adopt, as they are supersets of today's ES5. This means that all your existing code is valid and you can add their features a little at a time.

What should I do with $watch in our codebase?

tl;dr: $watch expressions need to be moved into declarative annotations. Those that don't fit there should take advantage of observables (reactive programming style).

In order to gain speed and predictability, in Angular 2 you specify watch expressions declaratively. The expressions either live in HTML templates, where they are auto-watched (no work for you), or have to be declaratively listed in the directive annotation.

One additional benefit from this is that Angular 2 applications can be safely minified/obfuscated for smaller payload.

For inter-application communication, Angular 2 offers observables (reactive programming style).
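
As a sketch of the difference (the Angular 2 template syntax shown here reflects the current alpha releases and may change):


// Angular 1: an imperative watch registered on the scope.
$scope.$watch('user.name', function(name) {
  $scope.greeting = 'Hello, ' + name;
});

// Angular 2: the equivalent expression lives in the template and is
// auto-watched, so no registration code is needed:
//   <h1>Hello, {{user.name}}!</h1>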

What can I do today to prepare myself for the migration?

Follow the best practices and build your application using components and services in Angular 1 as described in the AngularJS Style Guide.

Wasn't the original upgrade plan to use the new Component Router?

The upgrade plan that we announced at ng-conf 2015 was based on upgrading a whole view at a time and having the Component Router handle communication between the two versions of Angular.

The feedback we received was that, while yes, this was incremental, it wasn't incremental enough. We went back and redesigned the plan as described above.

Are there more details you can share?

Yes! In the Angular 1 to Angular 2 Upgrade Strategy design doc.

We're working on a series of upcoming posts on related topics including:

  1. Mapping your Angular 1 knowledge to Angular 2.
  2. A set of FAQs on details around Angular 2.
  3. Detailed migration guide with working code samples.

See you back here soon!

Posted by Ido Green, Developer Advocate

There is no higher form of user validation than having customers support your product with their wallets. However, the path to a profitable business is not necessarily an easy one. There are many strategies to pick from and a lot of little things that impact the bottom line. If you are starting a new business (or thinking how to improve the financial situation of your current startup), we recommend this course we've been working on with Udacity!

This course blends instruction with real-life examples to help you effectively develop, implement, and measure your monetization strategy. By the end of this course you will be able to:

  • Choose & implement a monetization strategy relevant to your service or product.
  • Set performance metrics & monitor the success of a strategy.
  • Know when it might be time to change methods.

Go try it at: udacity.com/course/app-monetization--ud518

We hope you will enjoy and earn from it!

Posted by Thomas Park, Senior Software Engineer, Google BigQuery

Many types of computations can be difficult or impossible to express in SQL. Loops, complex conditionals, and non-trivial string parsing or transformations are all common examples. What can you do when you need to perform these operations but your data lives in a SQL-based big data tool? Is it possible to retain the convenience and speed of keeping your data in a single system when portions of your logic are a poor fit for SQL?

Google BigQuery is a fully managed, petabyte-scale data analytics service that uses SQL as its query interface. As part of our latest BigQuery release, we are announcing support for executing user-defined functions (UDFs) over your BigQuery data. This gives you the ability to combine the convenience and accessibility of SQL with the option to use a familiar programming language, JavaScript, when SQL isn’t the right tool for the job.

How does it work?

BigQuery UDFs are similar to map functions in MapReduce. They take one row of input and produce zero or more rows of output, potentially with a different schema.

Below is a simple example that performs URL decoding. Although BigQuery provides a number of built-in functions, it does not have a built-in for decoding URL-encoded strings. However, this functionality is available in JavaScript, so we can extend BigQuery with a simple User-Defined Function to decode this type of data:


function decodeHelper(s) {
  try {
    return decodeURI(s);
  } catch (ex) {
    return s;
  }
}

// The UDF.
function urlDecode(r, emit) {
  emit({title: decodeHelper(r.title),
        requests: r.num_requests});
}

BigQuery UDFs are functions with two formal parameters. The first parameter is a variable to which each input row will be bound. The second parameter is an “emitter” function. Each time the emitter is invoked with a JavaScript object, that object will be returned as a row to the query.

In the above example, urlDecode is the UDF that will be invoked from BigQuery. It calls a helper function decodeHelper that uses JavaScript’s built-in decodeURI function to transform URL-encoded data into UTF-8.

Note the use of try / catch in decodeHelper. Data is sometimes dirty! If we encounter an error decoding a particular string for any reason, the helper returns the original, un-decoded string.

To make this function visible to BigQuery, it is necessary to include a registration call in your code that describes the function, including its input columns and output schema, and a name that you’ll use to reference the function in your SQL:


bigquery.defineFunction(
   'urlDecode',  // Name used to call the function from SQL.

   ['title', 'num_requests'],  // Input column names.

   // JSON representation of output schema.
   [{name: 'title', type: 'string'},
    {name: 'requests', type: 'integer'}],

   urlDecode  // The UDF reference.
);

The UDF can then be invoked by the name “urlDecode” in the SQL query, with a source table or subquery as an argument. The following query looks for the most-visited French Wikipedia articles from April 2015 that contain a cédille character (ç) in the title:


SELECT requests, title
FROM
  urlDecode(
    SELECT
      title, sum(requests) AS num_requests
    FROM 
      [fh-bigquery:wikipedia.pagecounts_201504]
    WHERE language = 'fr'
    GROUP EACH BY title
  )
WHERE title LIKE '%ç%'
ORDER BY requests DESC
LIMIT 100

This query processes data from a 5.6 billion row / 380 GB dataset and generally runs in less than two minutes. The cost? About $1.37, at the time of this writing.

To use a UDF in a query, it must be described via UserDefinedFunctionResource elements in your JobConfiguration request. UserDefinedFunctionResource elements can either contain inline JavaScript code or pointers to code files stored in Google Cloud Storage.
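
For example, a query job configuration carrying the UDF inline, or referencing a code file in Google Cloud Storage, might look like this sketch (the query text and URI are placeholders):


{
  "configuration": {
    "query": {
      "query": "SELECT requests, title FROM urlDecode(...)",
      "userDefinedFunctionResources": [
        {"inlineCode": "function urlDecode(r, emit) { ... } bigquery.defineFunction( ... );"},
        {"resourceUri": "gs://my-bucket/udfs.js"}
      ]
    }
  }
}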

Under the hood

JavaScript UDFs are executed on instances of Google V8 running on Google servers. Your code runs close to your data in order to minimize added latency.

You don’t have to worry about provisioning hardware or managing pipelines to deal with data import / export. BigQuery automatically scales with the size of the data being queried in order to provide good query performance.

In addition, you only pay for what you use - there is no need to forecast usage or pre-purchase resources.

Developing your function

Interested in developing your JavaScript UDF without running up your BigQuery bill? Here is a simple browser-based widget that allows you to test and debug UDFs.

Note that not all JavaScript functionality supported in the browser is available in BigQuery. For example, anything related to the browser DOM is unsupported, including Window and Document objects, and any functions that require them, such as atob() / btoa().

Tips and tricks

Pre-filter input

In our URL-decoding example, we are passing a subquery as the input to urlDecode rather than the full table. Why?

There are about 5.6 billion rows in [fh-bigquery:wikipedia.pagecounts_201504]. However, one of the query predicates will filter the input data down to the rows where language is “fr” (French) - this is about 262 million rows. If we ran the UDF over the entire table and did the language and cédille filtering in a single WHERE clause, that would cause the JavaScript framework to process over 21 times more rows than it would with the filtered subquery. This equates to a lot of CPU cycles wasted doing unnecessary data conversion and marshalling.

If your input can easily be filtered down before invoking a UDF by using native SQL predicates, doing so will usually lead to a faster (and potentially cheaper) query.

Avoid persistent mutable state

You must not store and access mutable state across UDF invocations for different rows. The following contrived example illustrates this error:


// myCode.js
var numRows = 0;

function dontDoThis(r, emit) {
  emit({rowCount: ++numRows});
}

-- The query.
SELECT max(rowCount) FROM dontDoThis(myTable);

This is a problem because BigQuery will shard your query across multiple nodes, each of which has independent V8 instances and will therefore accumulate separate values for numRows.

Expand SELECT *

You cannot execute SELECT * FROM urlDecode(...) at this time; you must explicitly list the columns being selected from the UDF: SELECT requests, title FROM urlDecode(...)

For more information about BigQuery User-Defined Functions, see the full feature documentation.

Posted by Laurence Moroney, Developer Advocate

In this episode of Coffee With a Googler, Laurence meets with Developer Advocate Timothy Jordan to talk about all things Ubiquitous Computing at Google. Learn about the platforms and services that help developers reach their users wherever it makes sense.

We discuss Brillo, which extends the Android Platform to 'Internet of Things' embedded devices, as well as Weave, which is a services layer that helps all those devices work together seamlessly.

We also chat about beacons and how they can give context to the world around you, making the user experience simpler. Traditionally, users need to tell you about their location and other types of context. But with beacons, the environment can speak to you. When it comes to developing for beacons, Timothy introduces us to Eddystone, a protocol specification for Bluetooth Low Energy (BLE) beacons; the Proximity Beacon API, which allows developers to register a beacon and associate data with it; and the Nearby Messages API, which helps your app 'sight' and retrieve data about nearby beacons.

Timothy and his team have produced a new Udacity series on ubiquitous computing that you can access for free! Take the course to learn more about ubiquitous computing, the design paradigms involved, and the technical specifics for extending to Android Wear, Google Cast, Android TV, and Android Auto.

Also, don't forget to join us for a ubiquitous computing summit on November 9th & 10th in San Francisco. Sign up here and we'll keep you updated.

Posted by Larry Yang, Lead Product Manager, Project Tango

At Google I/O, we showed the world many of the cool things you can do with Project Tango. Now you can experience it yourself by downloading these apps from Google Play onto your Project Tango Tablet Development Kit.

A few examples of creative experiences include:

MeasureIt is a sample application that shows how easy it is to measure general distances. Just point a Project Tango device at two or more points. No tape measures or step ladders required.

Constructor is a sample 3D content creation tool where you can scan a room and save the scan for further use.

Tangosaurs lets you walk around and dig up hidden fossils that unlock a portal into a virtual dinosaur world.

Tango Village and Multiplayer VR are simple apps that demonstrate how Project Tango’s motion tracking enables you to walk around VR worlds without requiring an input device.

Tango Blaster lets you blast swarms of robots in a virtual world, and can even work with the Tango device mounted on a toy gun.

We also showed a few partner apps that are now available on Google Play. Break A Leg is a fun VR experience where you’re a magician performing tricks on stage.

SideKick’s Castle Defender uses Project Tango’s depth perception capability to place a virtual world onto a physical playing surface.

Defective Studio’s VRMT is a world-building sandbox designed to let anyone create, collaborate on, and share their own virtual worlds and experiences. VRMT gives you libraries of props and intuitive tools to make the virtual creation process as streamlined as possible.

We hope these applications inspire you to use Project Tango’s motion tracking, area learning and depth perception technologies to create 3D experiences. We encourage you to explore the physical space around the user, including precise navigation without GPS, windows into virtual 3D worlds, measurement of spaces, and games that know where they are in the room and what’s around them.

As we mentioned in our previous post, Project Tango Tablet Development Kits will go on sale in the Google Store in Denmark, Finland, France, Germany, Ireland, Italy, Norway, Sweden, Switzerland and the United Kingdom starting August 26.

We have a lot more to share over the coming months! Sign-up for our monthly newsletter to keep up with the latest news. Connect with the 5,000 other developers in our Google+ community. Get help from other developers by using the Project Tango tag in Stack Overflow. See what others are creating on our YouTube channel. And share your story on Twitter with #ProjectTango.

Join us on our journey.

Posted by Taylor Savage, Product Manager

We’re excited to announce that the full speaker list and talk schedule has been released for the first ever Polymer Summit! Find the latest details on our newly launched site here. Look forward to talks about topics like building full apps with Polymer, Polymer and ES6, adaptive UI with Material Design, and performance patterns in Polymer.

The Polymer Summit will start on Monday, September 14th with an evening of Code Labs, followed by a full day of talks on Tuesday, September 15th. All of this will be happening at the Muziekgebouw aan ‘t IJ, right on the IJ river in downtown Amsterdam. All tickets to the summit were claimed on the first day, but you can sign up for the waitlist to be notified, should any more tickets become available.

Can’t make it to the summit? Sign up here if you’d like to receive updates on the livestream and tune in live on September 15th on polymer-project.org/summit. We’ll also be publishing all of the talks as videos on the Google Developers YouTube Channel.

Posted by Hoi Lam, Developer Advocate

If your users’ devices know where they are in the world – the place that they’re at, or the objects they’re near – then your app can adapt or deliver helpful information when it matters most. Beacons are a great way to explicitly label real-world locations and contexts, but how does your app get the message that it’s at platform 9 instead of the shopping mall, or that the user is standing in front of a food truck rather than just hanging out in the parking lot?

With the Google beacon platform, you can associate information with registered beacons by using attachments in Proximity Beacon API, and serve those attachments back to users’ devices as messages via the Nearby Messages API. In this blog post, we will focus on how we can use attachments and messages most effectively, making our apps more context-aware.
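
For example, adding an attachment to a registered beacon is a single REST call to the Proximity Beacon API; a minimal sketch (BEACON_ID, the namespace, and ACCESS_TOKEN are placeholders):


// Sketch: attach a small payload to a registered beacon.
var attachment = {
  namespacedType: 'my-project/string',  // your project namespace
  data: btoa('Platform 9')              // attachment data is base64-encoded
};

fetch('https://proximitybeacon.googleapis.com/v1beta1/beacons/' +
      BEACON_ID + '/attachments', {
  method: 'POST',
  headers: {
    'Authorization': 'Bearer ' + ACCESS_TOKEN,
    'Content-Type': 'application/json'
  },
  body: JSON.stringify(attachment)
});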

Think per message, not per beacon

Suppose you are creating an app for a large train station. You’ll want to provide different information to the user who just arrived and is looking for the ticket machine, as opposed to the user who just wants to know where to stand to be closest to her reserved seat. In this instance, you’ll want more than one beacon to label important places, such as the platform, entrance hall, and waiting area. Some of the attachments for each beacon will be the same (e.g. the station name), others will be different (e.g. platform number). This is where the design of the Proximity Beacon API and the Nearby Messages API on Android and iOS helps you out.

When your app retrieves the beacon attachments via the Nearby Messages API, each attachment will appear as an individual message, not grouped by beacon. In addition, Nearby Messages will automatically de-duplicate any attachments (even if they come from different beacons).

This design has several advantages:

  • It abstracts the API away from the implementation (beacons, in this case), so if in the future we have other kinds of devices that send out messages, we can adopt them easily.
  • Built-in deduplication means that you do not need to build your own logic to handle repeated messages, such as the station name in the above example.
  • You can add finer-grained context messages later on, without re-deploying.

In designing your beacon user experience, think about the context of your user, the places and objects that are important for your app, and then label those places. The Proximity Beacon API makes beacon management easy, and the Nearby Messages API abstracts the hardware away, allowing you to focus on creating relevant and timely experiences. The beacons themselves should be transparent to the user.

Using beacon attachments with external resources

In most cases, the data you store in attachments will be self-contained and will not need to refer to an external database. However, there are several exceptions where you might want to keep some data separately:

  • Large data items such as pictures and videos.
  • Where the data resides on a third party database system that you do not control.
  • Confidential or sensitive data that should not be stored in beacon attachments.
  • If you run a proprietary authentication system that relies on your own database.

In these cases, you might need to use a more generic identifier in the beacon attachment to look up the relevant data from your infrastructure.
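
A sketch of that pattern, with a placeholder backend URL and field names:


// Sketch: the attachment carries only an opaque ID, which the app
// resolves against your own backend (URL and fields are placeholders).
var contentId = message.content;  // e.g. 'food-truck-42'
fetch('https://example.com/api/content/' + contentId)
  .then(function(res) { return res.json(); })
  .then(function(details) { render(details); });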

Combining the virtual and the real worlds

With beacons, we have an opportunity to delight users by connecting the virtual world of personalization and contextual awareness with real world places and things that matter most. Through attachments, the Google beacon platform delivers a much richer context for your app that goes beyond the beacon identifier and enables your apps to better serve your users. Let’s build the apps that connect the two worlds!

Posted by Laurence Moroney, Developer Advocate

In this week’s Coffee with a Googler, we’re joined by Heidi Dohse from the Google Cloud team to talk about how she saved her own life through technology. At Google she works on the Cloud Platform team that supports our genomics work, and has a passion for the future of health care delivery.

When she was a child, Heidi Dohse had an erratic heartbeat, but, not knowing anything about it, she just ignored it. As a teen she became a competitive windsurfer and skier, and during a surgery on her knee between seasons, she had an EKG and discovered that her heart was beating irregularly at 270 bpm.

She had an experimental surgery and was immediately given a pacemaker, and became a young heart patient, expecting not to live much longer. Years later, Heidi is now on her 7th pacemaker, but it doesn’t stop her from staying active as an athlete. She’s a competitive cyclist, often racing hundreds of miles, and climbing tens of thousands of feet as she races.

At the core of all this is her use of technology. She has carefully managed her health by collecting and analyzing the data from her pacemaker. Because the data goes beyond just heartbeat, including readings such as gyroscope data, oxygen utilization, muscle stimulation, and electrical impulses, she can proactively manage her health.

It’s the future of health care -- instead of seeing a doctor for an EKG every few months, with this technology and other wearables, one can constantly check one's health data, know ahead of time if any health issues are developing, and pre-empt them.

Learn more in the video.

Posted by Laurence Moroney, Developer Advocate

Over the past few months, we’ve been bringing you 'Coffee with a Googler', giving you a peek at people working on cool stuff that you might find inspirational and useful. Starting with this week’s episode, we’re going to accompany each video with a short post for more details, while also helping you make the most of the tech highlighted in the video.

This week we meet with MacDuff Hughes from the Google Translate team. Google Translate uses statistics-based translation. By finding very large numbers of examples of translations from one language to another, it uses statistics to see how various phrases are treated, so it can make reasonable estimates of the correct, natural-sounding phrases in the target language. For common phrases there are many candidate translations, so the engine converts them within the context of the passage the phrase is in. Images can also be translated: point your mobile device at printed text and it will translate it into your preferred language for you.

Translate is adding languages all the time, and part of its mission is to serve languages that are less frequently used, such as Gaelic, Welsh, or Maori, in the same manner as the more commonly used ones, such as English, Spanish, and French. To this end, Translate supports more than 90 languages. In the quest to constantly improve translation, the technology provides a way for the community to validate translations, and this is particularly useful for less commonly used languages, effectively helping them to grow and thrive. It also enhances the machine translation by keeping people involved.

You can learn more about Google Translate, and the translate app here.

Developers have a couple of options to use translate:

  • The Free Website Translate plugin that you can add to your site and have translations available immediately.
  • The Cloud-based Translate API that you can use to build apps or sites that have translation in them (see the sketch below).
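
As a quick sketch of the second option, calling the Translate API v2 REST endpoint (API_KEY is a placeholder for your own key):


// Sketch: translate a phrase from English to French.
fetch('https://www.googleapis.com/language/translate/v2' +
      '?key=' + API_KEY + '&source=en&target=fr' +
      '&q=' + encodeURIComponent('Coffee with a Googler'))
  .then(function(res) { return res.json(); })
  .then(function(json) {
    console.log(json.data.translations[0].translatedText);
  });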

Watch the episode here: