Channel: 懒得折腾

About React

============================
Official
============================

React official website
http://facebook.github.io/react/

============================
Quick Start
============================

React in 7 Minutes
https://egghead.io/lessons/react-react-in-7-minutes

============================
Community
============================

https://www.facebook.com/groups/reactjs.tw/
https://www.facebook.com/groups/228321510706889/

============================
Books
============================

Developing a React Edge: The JavaScript Library for User Interfaces
https://www.safaribooksonline.com/library/view/developing-a-react/9781939902122/

Awesome React Book
https://github.com/enaqx/awesome-react#books

React book
http://www.reactbook.org

============================
Slides
============================
React Architecture
https://speakerdeck.com/vjeux/oscon-react-architecture

ReactJS: Keep Simple. Everything can be a component!
https://speakerdeck.com/pedronauck/reactjs-keep-simple-everything-can-be-a-component

React/Flux in Action (hands-on experience sharing)
https://speakerdeck.com/coodoo/flux-in-action-shi-zhan-jing-yan-fen-xiang

React slides in speakerdeck
https://speakerdeck.com/search?utf8=%E2%9C%93&q=React

============================
React Training
============================
https://egghead.io/technologies/react
https://egghead.io/series/react-flux-architecure

============================
npm Packages for React
============================

react npm
https://www.npmjs.com/package/react

react components
http://react-components.com/

============================
Articles
============================

Thinking in React
http://facebook.github.io/react/blog/2013/11/05/thinking-in-react.html

Learning React.js: Getting Started and Concepts
https://scotch.io/tutorials/learning-react-getting-started-and-concepts

The Future of JavaScript MVC Frameworks
http://swannodette.github.io/2013/12/17/the-future-of-javascript-mvcs/

Two Weird Tricks that Fix React
https://medium.com/@dan_abramov/two-weird-tricks-that-fix-react-7cf9bbdef375

Advanced Performance
http://facebook.github.io/react/docs/advanced-performance.html#immutable-js-to-the-rescue

============================
Flux
============================

What is the Flux Application Architecture?
https://medium.com/brigade-engineering/what-is-the-flux-application-architecture-b57ebca85b9e

The Flux Quick Start Guide
http://www.jackcallister.com/2015/02/26/the-flux-quick-start-guide.html

How can React and Flux help us create better Angular applications?
https://medium.com/@gilbox/how-can-react-and-flux-help-us-create-better-stronger-faster-angular-applications-639247898fb

============================
Video
============================

Hacker Way: Rethinking Web App Development at Facebook
https://www.youtube.com/watch?v=nYkdrAPrdcw&list=PLb0IAmt7-GS188xDYE-u1ShQmFFGbrk0v

React.js Conf 2015 – Making your app fast with high-performance components
https://www.youtube.com/watch?v=KYzlpRvWZ6c#t=1326

React.js Conf 2015
https://www.youtube.com/playlist?list=PLb0IAmt7-GS1cbw4qonlQztYV1TAW0sCr

Community Round-up #24
http://facebook.github.io/react/blog/2014/11/25/community-roundup-24.html

React.js Conf 2015 keynote: Introducing React Native
https://code.facebook.com/videos/786462671439502/react-js-conf-2015-keynote-introducing-react-native-/

============================
Additional Resources
============================

http://www.reddit.com/r/reactjs/

============================
Conferences
============================
react 2014
http://reactconf.com/

react.js conf
http://conf.reactjs.com/

============================
Examples
============================

Sample mobile application with react and cordova
http://coenraets.org/blog/2014/12/sample-mobile-application-with-react-and-cordova/

flux
https://github.com/facebook/flux/tree/master/examples

Examples from the awesome-react list
https://github.com/enaqx/awesome-react/tree/master/examples

============================
Developer Tools
============================

React Developer Tools
https://chrome.google.com/webstore/detail/react-developer-tools/fmkadmapgofadopljbjfkapdkoienihi
https://www.youtube.com/watch?v=Cey7BS6dE0M&list=PLAq3rthfTjp7mY6bNsLds5_FA4JVfRXpl&index=3

Sublime Text with React plugin
http://www.nitinh.com/2015/02/setting-sublime-text-react-jsx-development/

React snippets
https://github.com/reactjs/sublime-react

 
 
 
Updated @2015.03.23


A Beginner's Guide to Mobile Development with Meteor

Out of the box, the Meteor JavaScript framework includes Cordova,

“a set of device APIs that allow a mobile app developer to access native device functions such as the camera or accelerometer from JavaScript”.

If you’re a web developer who wants to release their work on iOS and Android (while harnessing the features of those platforms), you don’t have to learn a new language or entirely new concepts. You just need a basic grasp of Meteor, and then a basic grasp of details specific to mobile development.

Step #1: Prepare for mobile development with Meteor.

Obviously, you’ll need to install Meteor on your computer if you’re looking to develop with it. If it’s not installed, enter this command into the terminal:

curl https://install.meteor.com/ | sh

You’ll need a basic grasp of Meteor, so either check out the “Learning Resources” section of the official website, or the book I wrote for beginners.

To develop for iOS, a copy of Xcode needs to be installed on your system. This can be downloaded for free from the Mac App Store.

Step #2: Add mobile support to a project.

Cordova is included with Meteor itself but has to be manually added to any particular Meteor project. This avoids bloating every project with code it may not need. You add support for Cordova by adding specific platforms.

For example, to add support for iOS, enter the following into the terminal:

meteor add-platform ios

Or to add support for Android, enter the following into the terminal:

meteor add-platform android

When adding support for Android, you’ll be prompted to install any relevant software that is not already installed.

Step #3: Create a mobile configuration file.

Within your project folder, create a mobile-config.js file. Inside this file, we’re able to set a number of configuration options for the mobile portion of the application, including:

  • Meta-data, like the application name and description.
  • Preferences, like the default orientation of the application.
  • Additional preferences for specific Cordova plugins.

You can see an example of this configuration in the official Meteor docs:

// This section sets up some basic app metadata,
// the entire section is optional.
App.info({
  id: 'com.example.matt.uber',
  name: 'über',
  description: 'Get über power in one button click',
  author: 'Matt Development Group',
  email: 'contact@example.com',
  website: 'http://example.com'
});
// Set up resources such as icons and launch screens.
App.icons({
  'iphone': 'icons/icon-60.png',
  'iphone_2x': 'icons/icon-60@2x.png',
  // ... more screen sizes and platforms ...
});
App.launchScreens({
  'iphone': 'splash/Default~iphone.png',
  'iphone_2x': 'splash/Default@2x~iphone.png',
  // ... more screen sizes and platforms ...
});
// Set PhoneGap/Cordova preferences
App.setPreference('BackgroundColor', '0xff0000ff');
App.setPreference('HideKeyboardFormAccessoryBar', true);
// Pass preferences for a particular PhoneGap/Cordova plugin
App.configurePlugin('com.phonegap.plugins.facebookconnect', {
  APP_ID: '1234567890',
  API_KEY: 'supersecretapikey'
});

For a full run-down of the available options, check out the “The config.xml File” section of the Cordova documentation.

Step #4: Write Cordova-only Code.

By making mobile applications with Meteor, you can write the majority of the functionality once, and most of the code will work across platforms. But not all code should run on every platform, and in the same way that we can control whether certain code runs on the client or the server with the isClient and isServer conditionals, there's also an isCordova conditional:

if(Meteor.isCordova){
    // code goes here
}

The above block of code will only run if it’s being executed within a Cordova mobile environment. (We won’t use this code in this particular tutorial but it won’t take long to find a situation where it comes in handy.)
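As a quick illustration of how these flags partition code, here is a sketch that runs outside Meteor. The Meteor object below is a hypothetical stub; in a real app, Meteor sets these flags itself based on where the code is executing:

```javascript
// Stub standing in for the real Meteor object (illustrative only).
// In a Cordova build, isCordova would be true on the device.
const Meteor = { isClient: true, isServer: false, isCordova: false };

// Each block runs only in its matching environment.
const ran = [];
if (Meteor.isClient) ran.push('client');
if (Meteor.isServer) ran.push('server');
if (Meteor.isCordova) ran.push('cordova');

console.log(ran.join(',')); // → client
```

Here only the client branch runs, because the stub models a plain desktop browser; on a device, the cordova branch would run as well.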

Step #5: Add mobile packages to your application.

Currently, there are three official Meteor packages that make it easy to add mobile features to your application:

  • Camera, for taking photos on either mobile devices or desktop computers.
  • Geolocation, for tracking the user’s location with a device’s GPS.
  • Reload on Resume, which can notify users when an update is available for the application and encourage them to restart to see the changes.

Further, unofficial packages can be found on atmospherejs.com.

The links above provide documentation on how to use all of these packages and they’re simple enough that you can probably understand them without further explanation. As an example, let’s play around with the “Camera” package.

What we’ll do is create a button that, when clicked, opens the camera on the user’s device (Android, iOS, or desktop) and allows them to take a photo. If they take a photo, that data will be returned to the application and we’ll be able to do whatever we want with the picture.

First, add the “Camera” package to the project:

meteor add mdg:camera

Then delete the contents of your project’s default HTML file and replace it with the following:

<head>
  <title>Camera Example</title>
</head>
<body>
  {{> takePhoto}}
</body>
<template name="takePhoto">
  <p><input type="button" class="capture" value="Take Photo"></p>
</template>

Here, we’re creating an interface that contains a “Take Photo” button.

Inside the default JavaScript file, delete the current contents and replace it with the following:

if(Meteor.isClient){
  Template.takePhoto.events({
    'click .capture': function(){
      console.log("Button clicked.");
    }
  });
}

Because of this event, a message will now appear inside the JavaScript console whenever the button is clicked (or, on a smartphone, tapped).

Within this event, write the following:

MeteorCamera.getPicture();

This is the function that’s built into the “Camera” package that allows us to tap into the user’s hardware to capture a photo. It accepts two parameters:

  • Options, such as setting the width and height for the photo.
  • A callback function, for retrieving the data of the photo.

Just for the moment, we won’t pass through any options:

MeteorCamera.getPicture({});

But we will pass through a callback function as the second parameter:

MeteorCamera.getPicture({}, function(error, data){
  // something goes here
});

Because of this callback function, we can now retrieve any errors, along with the data of the captured photo. To see this in action, use a log statement:

MeteorCamera.getPicture({}, function(error, data){
  console.log(data);
});

Test the application on a webcam-enabled computer and notice that a URL appears in the Console after you’ve captured a photo. We can use this URL to embed the photo within the interface.

First, store the image data inside a session:

MeteorCamera.getPicture({}, function(error, data){
  Session.set('photo', data);
});

Then create a helpers block for the “takePhoto” template:

Template.takePhoto.helpers({
  'photo': function(){
    // code goes here
  }
});

Here, I’ve created a “photo” helper that we’ll embed in our template in a moment, but we’ll need to return the value of the “photo” session:

Template.takePhoto.helpers({
  'photo': function(){
    return Session.get('photo');
  }
});
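Before wiring the helper into the template, the store-and-read round trip at the heart of this pattern can be sketched standalone. The Session object and getPicture function below are hypothetical stubs for Meteor's Session and the mdg:camera package, so the sketch runs in plain Node; in the app, the real APIs are used as shown above:

```javascript
// Stub for Meteor's Session: a reactive key-value store in the real thing,
// a plain object here.
const Session = {
  _store: {},
  set(key, value) { this._store[key] = value; },
  get(key) { return this._store[key]; }
};

// Stub for MeteorCamera.getPicture: the real function invokes the callback
// with an error (or null) and the photo data on success.
function getPicture(options, callback) {
  callback(null, 'data:image/png;base64,AAAA');
}

// The event handler stores the photo in the session...
getPicture({}, function (error, data) {
  if (!error) Session.set('photo', data);
});

// ...and the helper reads it back for the template.
function photoHelper() { return Session.get('photo'); }

console.log(photoHelper()); // → data:image/png;base64,AAAA
```

The data never has to pass between the handler and the helper directly; the session acts as the shared (and, in Meteor, reactive) middleman.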

Then, in the “takePhoto” template, simply reference this helper:

<template name="takePhoto">
  <p><input type="button" class="capture" value="Take Photo"></p>
  <p>{{photo}}</p>
</template>

Now when we capture a photo, a string like the one from before will appear inside the interface, and that string works perfectly well within the src attribute of an img tag:

<template name="takePhoto">
  <p><input type="button" class="capture" value="Take Photo"></p>
  <p><img src="{{photo}}"></p>
</template>

But of course, we’re only executing this code on a computer at the moment, when what we really want is to execute it within a mobile application.

Step #6: Test your application.

You can run the application within the iOS simulator by entering the following command into the terminal:

meteor run ios

Note that the photo feature won’t actually work within the iOS simulator. It’ll work on the phone itself, and in the Android simulator, and in a desktop browser, but not in the iOS simulator. (You can, however, use the other mobile packages, so I’d suggest playing around with them.)

To run the application within the Android simulator, use this command:

meteor run android

If you haven’t yet used the meteor add-platform android command, you will have to install some additional software, but the terminal will guide you through this process.

Conclusion

In this tutorial, we’ve only covered the basics of creating a mobile-friendly application with Meteor, but I hope it’s been enough to entice you to dig further. Meteor is a wonderfully fun framework and, while building mobile applications natively might make more sense in many cases, the Cordova integration nevertheless provides an elegant option for people who aren’t interested in learning a whole other technology.


VELOCITY, FRAMEWORKS & PLUGINS


In this chapter, you will learn:

  • The thinking behind Velocity
  • Changes to the development workflow
  • How it works
  • Choosing the right framework(s)

WHAT IS VELOCITY?

Velocity was born out of a meeting between Mike Risse, Adrian Lanning, Joshua Owens, Abigail Watson, Robert Dickert and myself, each of us having played our own roles in the Meteor testing story. When we came together, we proposed the following goals for a unified testing framework:

  • Simple installation
  • An easy Meteor-way workflow
  • A one stop shop for all Meteor testing
  • Flexible for the community to evolve

And that’s exactly what happened. The Meteor Development Group saw the above and named Velocity the official testing framework for Meteor 1.0. See the video of our intro talk here.

Simple Installation

Assuming you want to test using Mike Risse’s (excellent) Mocha-web:

$ meteor add mike:mocha

You can now get started, or you can add more test frameworks, plugins and reporters as you will shortly see.

It so happens that mike:mocha is a test framework that has a dependency on both velocity:core and on velocity:html-reporter, so you don’t need to install any other packages.

A Reactive Test Runner

When installed, the velocity:core package monitors any files that change under the /tests directory in your project and/or detects when Meteor restarts. When one (or both) of these things happen, the package informs the test framework(s) to re-run the tests. Once the framework(s) have run their tests, they report back to Velocity which publishes results. Reporters observe these results and make them accessible to you. All of this happens reactively whilst you are developing.

If you’re familiar with Karma, you’ll notice it has some similarities to Velocity. In fact, Velocity’s architecture is an evolution of RTD, which is also a test runner and was built on top of Karma. Velocity, however, was written from the ground up with Meteor’s inner workings in mind.

Unified Testing

Prior to Velocity, the main testing players were TinyTest, Mocha-web, Laika and RTD, and you were left to choose between them, oftentimes having to give up one feature to gain another. With Velocity, you no longer have to make that choice and can use a combination of frameworks. Once you have opted to test with Mocha-web for integration testing, for example, you would be able to simultaneously include Tinytest for package testing.

Also, plugins that were only available in one framework, such as linting or code coverage, can now be shared. These are big wins as they allow you to collect and utilize the best possible toolset for your project.

Extensible Framework

Since Velocity is the test runner, it allows package authors to write separate frameworks and use a common API amongst them. The end product is a consistent user journey for you as the developer.

Much like many Meteor packages are wrappers around other NPM packages, Velocity frameworks are currently wrappers around existing testing technologies, with a lot of plumbing to make them work reactively.

There are a lot of tried and tested JS testing frameworks that are candidates for porting to Meteor, like CasperJS and Chai, and even non-JS frameworks, like JMeter, can be integrated.

The scope of this book won’t cover how to write Velocity frameworks and plugins, but hopefully you can see the potential that Velocity has opened up for testing Meteor applications.

If you would like to get involved, you can join the Velocity-core group.

GETTING STARTED

Please note: This book is being written while Velocity is still under heavy development, and as such, Velocity is buggy. As a member of the Velocity-core team, I’ll be fixing as many of these errors as possible, and they should be resolved by the time this book is completed. Please do report any issues you encounter.

The New Testing & Development Workflow

Let’s start with Meteor’s todos example app:

# Create the example and go into it
$ meteor create --example todos
$ cd todos

# Add the Mocha testing framework from Mike Risse
$ meteor add mike:mocha

# Start testing (and coding)
$ meteor

You’ll see the app running on http://localhost:3000 and you’ll also notice a little dot in the top right corner:

VelocityGreenDot

Clicking this dot will reveal the following screen:

AddSampleTests

Since you have just started, you don’t yet have any tests. The html-reporter has a feature that detects the non-existence of test files and allows you to create sample tests from that framework. When the “Add mocha sample tests” button is clicked, the following files should appear in the project directory:

/tests
/tests/mocha/client/sampleClientTest.js
/tests/mocha/server/sampleServerTest.js

Notice that this framework has opted to place all its tests under the /tests/mocha directory.
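The sample tests take the shape of standard Mocha describe/it blocks with chai assertions. The sketch below shows that shape; the minimal describe/it/chai stubs exist only so it runs standalone, since inside a Meteor app mike:mocha provides the real ones:

```javascript
// Tiny stand-ins for the globals mike:mocha injects (illustrative only).
const chai = {
  assert: {
    equal(a, b) { if (a !== b) throw new Error(a + ' !== ' + b); }
  }
};
const results = [];
function describe(name, fn) { fn(); }
function it(name, fn) {
  try { fn(); results.push(name + ': passed'); }
  catch (e) { results.push(name + ': failed'); }
}

// The test itself: this is the part that resembles the generated samples.
describe('Sample client test', function () {
  it('checks that 5 equals 5', function () {
    chai.assert.equal(5, 5);
  });
});

console.log(results.join('\n')); // → checks that 5 equals 5: passed
```

Changing the assertion to compare unequal values is exactly what you'll do in a moment to watch a test fail.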

You will also notice the html-reporter changes to this:

MochaPassingTests

The results of the sample tests are displayed, and here they are shown as passing. Now you can collapse the reporter by clicking the dot again, as seen here:

VelocityGreenDot

The green dot in the corner is letting you know that all the tests are currently passing. Let’s see what happens if you break a test. Edit the file sampleClientTest.js, change the line chai.assert.equal(5,5); to chai.assert.equal(5,6); and save it. You should quickly see this:

VelocityRedDot

The dot is red and pulsating, letting you know there’s something wrong. Clicking it will show the details of the failure:

MochaFailingTests

The failing test is reported with the stack trace.

Now, for fun, let’s return the test back to chai.assert.equal(5,5); and save it whilst the html-reporter is open. The report will reactively return to a green state.

VelocityGreenDot

This is how Velocity adds testing to the Meteor development workflow, making it easy to work with test-first workflows such as TDD (Test-Driven Development) and BDD (Behaviour-Driven Development). Although you are using the html-reporter here, there is nothing stopping you from switching to another reporter. At this time, not many other options exist, but you can expect some to surface in the near future, such as audible reporters, system notifications, and console printers.

The Html-Reporter Buttons

Additional information to help users better understand errors and tests is made available through the buttons on the left.

“Show passing tests”

By default, the reporter shows an aggregate pass result if all tests are passing, and it only expands the failing tests. This button opens all the results at once as shown below:

VelocityShowPassingTests

“Show logs”

Velocity exposes an API for frameworks to post their logs. This button may reveal more details about the error if a framework uses Velocity’s logging feature.

VelocityShowLogs

“Show files”

Velocity watches files under the /tests directory on behalf of frameworks. This button shows you what Velocity has detected and is useful for tracking down issues with tests not running as expected, such as a file not being named as a framework expects it to be.

VelocityShowFiles

“Show iframe”

Mocha is an integration testing framework. The client portion of this integration testing runs inside an iframe, and this button reveals it. This is also a peek into the inner workings of Velocity: to understand the need for the iframe, you have to look under the hood.

VelocityShowIFrame

UNDER THE HOOD

Mirror Mirror

Tests are necessarily destructive. That is, in the setup, execute and verify stages, they can clear databases and add or remove data. For tests to be able to do this without being affected by user actions, and for a user to continue being able to develop an application uninterrupted, Velocity creates a mirror of the running application and tests using this mirror.

The mirror is a physical copy running an entirely separate Meteor command on a different port with a different database. The copy is made through an rsync process that happens whenever the main app restarts. This in turn triggers the mirror app to restart.

Frameworks that require client testing and need to access the mirror are currently using either iframes to run the tests on or are instantiating a browser, headless or otherwise.

Mirrors are another area that is currently in flux within Velocity. The concept is valid, though its implementation is not the final solution.

Lifecycle and Collections

Velocity has a lifecycle that makes some heavy use of collections. The lifecycle works as follows:

  1. Velocity and installed frameworks are packages that start with your app.
  2. Frameworks register themselves with Velocity and provide options. A particular option worthy of note is the regex used to match files in the /tests directory.
  3. Velocity starts a mirror or mirrors (based on the framework options above) and updates metadata about each mirror in the VelocityMirrors collection. Some frameworks wait for this information to be present before they commence testing.
  4. Velocity’s file watcher monitors the /tests directory and inserts a document containing the path, as well as the target framework, into VelocityTestFiles.
  5. The mirror’s source files are synchronized from the main app, and any fixtures are copied to the mirror. In some cases, the test frameworks copy the tests themselves into the mirror. These changes cause the mirror to restart, since the mirror is a separate Meteor application that has its own file-watching capability.
  6. Frameworks subscribe to or observe the VelocityTestFiles collection and filter documents that have the framework field set to the framework’s name. If any file additions/removals/modifications happen, the framework may choose to run only the changed tests, or it may re-run all of them. In either case, when the testing run is finished, the frameworks post their results back to Velocity. Frameworks may also choose to post their log entries to Velocity.
  7. Velocity inserts the results and other useful metadata (such as the framework name and any exceptions) into the VelocityTestReports collection. It also updates the VelocityAggregateReports collection, which is used to store the completion status of frameworks.
  8. Whilst the frameworks run the tests, any logs that were submitted to Velocity are added to the VelocityLogs collection, again with relevant metadata about the framework and test run.
  9. Reporters subscribe to or observe the VelocityTestReports, VelocityAggregateReports and VelocityLogs collections and present the information they need.
  10. A user changes a file in the application code or some test files. The cycle repeats from step 4.
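The reactive core of this lifecycle (steps 6 through 9) can be modelled with a toy in-memory collection. The names below mirror the real Velocity collections, but the code is illustrative only, not the actual Velocity API:

```javascript
// A toy collection with observe semantics: observers are replayed the
// existing documents and then notified of each new insert.
function makeCollection() {
  const docs = [];
  const observers = [];
  return {
    insert(doc) {
      docs.push(doc);
      observers.forEach((o) => o.added(doc));
    },
    observe(callbacks) {
      docs.forEach((doc) => callbacks.added(doc));
      observers.push(callbacks);
    }
  };
}

const VelocityTestReports = makeCollection();

// A reporter observes results and renders them (here, just collects strings).
const rendered = [];
VelocityTestReports.observe({
  added(report) {
    rendered.push(report.framework + ': ' + report.name + ' ' + report.result);
  }
});

// A framework posts a result back after a test run (steps 6-7).
VelocityTestReports.insert({ framework: 'mocha', name: 'sample test', result: 'passed' });

console.log(rendered.join('\n')); // → mocha: sample test passed
```

Because the reporter reacts to inserts rather than polling, results appear as soon as a framework posts them, which is what makes the in-app dot flip between green and red in real time.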

Take-Home Notes:

  • Velocity is an OpenSource initiative by the community for the community
  • Velocity is supported by the MDG and is the official testing solution for Meteor 1.0
  • Test frameworks run the actual tests and they use Velocity as their core
  • Mirrors are a copy of your application, running on a different port

THE COMMUNITY

There are three players in the Velocity community: the core team, test package authors, and users. The core team creates and maintains the core packages to support package authors, whilst package authors create and maintain their own frameworks and plugins. The users benefit!

The Core Packages

The Velocity GitHub organization is where you will find the core packages. These are:

velocity:core
This is the core test runner. All test frameworks have a dependency on this package.

velocity-ci
This package provides continuous integration support for Velocity. It’s an NPM module that launches Velocity and runs the client tests using PhantomJS.

velocity:html-reporter
Adding this package provides you with an in-app reporter of test results, logs, and other useful information. You can see this in the Getting Started section of this chapter.

The Available Test Frameworks and Plugins

Below is a list of the currently available Velocity packages. These are frameworks you can use today.

You’ll notice there are two types of unit tests in Meteor: in-context and isolated. Isolated unit tests are true unit tests, in that the application code or system under test (SUT) is loaded into a VM without any other code. In-context tests are different: the SUT is at unit level, but the entire Meteor context is loaded. Whilst this is adequate for most needs, it leaves room for error, and the purists will have a thing or two to say about that!

mike:mocha
TDD, BDD Integration, In-Context Unit
This test framework uses Mocha and Chai, and it supports both client and server-side integration testing. It requires a mirror and iframe for client-side integration tests.

STATUS: Most complete framework. It does one thing and it does it really well. Combines mocha with meteor and gives you server and client side integration testing.

sanjo:jasmine
BDD, Integration, In-Context Unit, Isolated Unit
This test framework uses Jasmine 2.0 and features client-side integration testing with server-side unit testing. The unit-testing portion utilizes auto-stubbing. Support for client-side unit testing and server-side integration testing are planned. This framework also requires a mirror and iframe for client-side integration tests.

STATUS: Most ambitious framework. It addresses the extremely difficult issue of isolated unit testing and adds auto-stubbing to it! As such, it requires a more advanced understanding of testing to use at this early stage.

clinical:nightwatch
End-to-end Browser Automation
A CLI-based Nightwatch.js wrapper, this framework uses Selenium webdriver and supports SauceLabs & BrowserStack. Nightwatch.js is a tried and tested UI testing solution and sports a proprietary syntax for defining test cases.

STATUS: Runs as a shell script and reports results back to Velocity. Doesn’t use Velocity’s test cycle. Not currently compatible with the velocity-ci command.

nblazer:casperjs
End-to-end Headless Browser Automation
A testing framework that uses the popular CasperJS navigation and scripting utility. CasperJS can use either the PhantomJS (WebKit-based) or SlimerJS (Gecko-based) headless browsers. This framework can be used standalone or within other test frameworks such as meteor-jasmine or meteor-mocha-web.

STATUS: Ready since it wraps an existing proven technology and is integrated into the Velocity architecture.

numtel:velocity-tinytest
TDD, Integration, In-Context Unit
Whereas TinyTest was designed for packages, this framework allows you to use TinyTest against your application directly. This means you can place files under your project’s /tests directory instead of inside packages.

STATUS: Very new framework that creates a new testing approach using the existing familiar TinyTest.

xolvio:coverage
Code Coverage
This is a plugin that instruments the mirror code using Istanbul and provides code coverage reports for any framework that uses a mirror.

STATUS: Still in alpha, needs a 1.0 update and rigorous testing with user apps.

velocity:meteor-stubs
Unit Testing Add-on
Contains a set of stubs for the core Meteor objects that are used both in the server and client.

Status: Stable. Currently only Jasmine supports these stub files.

Frameworks and Packages in Progress

These are currently being developed and will soon be ready to join the available frameworks:

spacejamio:munit
TDD, BDD, Integration, Package Testing, In-Context Unit
Munit is an existing test package that is being ported to Velocity. It extends the official yet simplistic TinyTest framework and augments it with SinonJS and Chai. There is also talk of bringing Mocha and Jasmine support into this framework.

xolvio:webdriver
End-to-end Browser Automation
The vision behind this framework is to bring tight integration between webdriver browser automation and the Jasmine or Mocha test frameworks.

xolvio:cucumber
ATDD, BDD, End-to-end Browser Automation, Integration, Unit
Cucumber is an industry standard framework for specification-by-example testing. This package brings cucumber to Meteor, allowing you to write features and step definitions using Gherkin syntax coupled with any testing framework.

Other Planned Frameworks and Plugins:

These are items that are on the horizon that are known from talking with Velocity team members and planning the roadmap:

meteor-jshint
Static Code Analysis
JSHint is a popular code analysis package in the JavaScript world. It analyzes code based on rules defined in configuration files and fails if standards are not met. This plugin is a Velocity port of JSHint.

browser-launcher
Velocity Extension
This extension allows you to run your integration tests using real browsers instead of an iframe or PhantomJS.

console-reporter
Velocity Extension
This extension reports the status of tests via the console or terminal. This is useful if you do not wish to clutter your app with the html-reporter.

You can browse the Velocity roadmap on Trello for a more detailed peek into what’s planned and what’s in progress.

CHOOSING A FRAMEWORK

Today there are a handful of Velocity-compatible frameworks to use, some overlapping in function, others distinct.

The short answer

You should use either meteor-mocha-web or meteor-jasmine if you want to get reactive in-app reporting goodness today. Wasn’t that short?!

The long answer

As you might expect, this is not a clear-cut scenario, and you have to determine what type of testing capability you need. You will recall from the test-boundaries section in the Fundamentals chapter that an application is made up of small systems (unit boundary level) that work together to create larger systems (integration boundary level), which in turn create even larger systems (end-to-end boundary level). In theory, all you need is a framework that supports all the levels and their nuances. Sadly, that framework doesn’t exist (yet), so you need to identify the systems in your application that need testing the most and the framework that will provide the means to test those systems at their boundary level.

Below is a list of system types typically found in applications, as well as the types of tests suited to them. See if your application has these systems to get a feel for the type of testing needs your app has.

Contains Custom Algorithms

Algorithms have a high number of execution paths based on the state that enters them. Well-designed algorithms will do one thing and do it very well. This means you want to focus your SUT to be at the unit level, so you’ll need a framework that supports unit testing. If you’re using a library that provides algorithms as opposed to writing your own, then you’ll probably need to apply integration testing.

Uses External APIs

Twilio is a messaging service that allows people to send SMS messages. A Twilio integration would typically start with a Meteor method call, which would send requests to Twilio’s servers via REST or similar. The SUT boundary here can be drawn between the Meteor method call and the REST call being made at the back end. This is an integration boundary and would be best tested with integration testing frameworks, as you want to be sure you’re exercising the API correctly.

Uses UI Widgets

JavaScript includes are a popular means of adding functionality to websites, such as UserVoice, which allows your site to collect feedback from your users. These integrations are usually best tested with a UI testing framework, as you want to ensure they appear on the page correctly and a user can interact with them.

Has a Complicated UI

Counterintuitively, a complicated UI would benefit just as much from unit testing as it would from UI testing. This is because complicated UIs typically contain a high number of heuristics. Consider a drag & drop application that deals with many boundary cases. You would want a combination of unit tests and UI tests to make sure you have good coverage.

Is Mostly Made from Custom Packages

If all the code you write is in package form and the main app is a collection of packages, then you’ll likely want to use a framework that supports package testing. It is possible, however, to structure your packages in such a way that they are testable using an integration or UI framework. You’ll see how to do this in the Testing Packages chapter.

Still confused?

Even the most seasoned testers don’t always know where to start, and sometimes a test may start at the UI level and shift midway to an integration or unit level. It’s important to know that the above are guidelines to get you started with a test. The most important part of testing is to actually write tests and reap the benefits. So even if you do draw the boundary of your SUT at the integration level when it could be done better at a unit level, know that both the code and the tests will evolve and both will require refactoring over time. As you learn more about testing in this book, you will develop the skills required to refactor tests. You will learn how to practically shift the focus of the SUT up and down the boundary levels as needed, and you will learn the “smells” that tell you when you should reconsider your approach.

Take Home Notes

  • There are core packages, test frameworks, and test plugins
  • Test frameworks and plugins are created and maintained by package authors
  • Some frameworks are stronger than others at specific boundary levels
  • Even when in doubt, start writing a test, knowing that you can and probably will refactor it over time

Integrating External APIs into your Meteor.js application


05.07.2015

Meteor itself does not rely on REST APIs, but it can easily access data from other services. This article is an excerpt from the book Meteor in Action and explains how you can integrate third-party data into your applications by accessing RESTful URLs from the server-side.

Many applications rely on external APIs to retrieve data. Getting information about your friends from Facebook, looking up the current weather in your area, or simply retrieving an avatar image from another website – there are endless uses for integrating additional data. They all share a common challenge: APIs must be called from the server, but an API call usually takes longer than executing the method itself. You need to ensure that the result gets back to the client – even if it takes a couple of seconds. Let’s talk about how to integrate an external API via HTTP.

Based on the IP address of a visitor, you can determine various details about their current location, e.g., coordinates, city, or timezone. There is a simple API that takes an IPv4 address and returns all these tidbits as a JSON object. The API is called Telize.

Making RESTful calls with the http package

In order to communicate with RESTful external APIs such as Telize, you need to add the http package:
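The command that belongs here was not included in the excerpt; adding a core Meteor package is done with the standard add command (a safe assumption, as http ships as a core package):

```shell
$ meteor add http
```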

While the http package allows you to make HTTP calls from both client and server, the API call in this example will be performed from the server only. Many APIs require you to provide an ID as well as a secret key to identify the application that makes an API request. In those cases you should always run your requests from the server. That way you never have to share secret keys with clients.

Let’s look at a graphic to explain the basic concept.

A user requests location information for an IP address (step 1). The client application calls a server method called geoJsonForIp (step 2) that makes an (asynchronous) call to the external API using the HTTP.get() method (step 3). The response (step 4) is a JSON object with information regarding the geographic location associated with the IP address, which gets sent back to the client via a callback (step 5).

Using a synchronous method to query an API

Let’s add a method that queries telize.com for a given IP address as shown in the following listing. This includes only the bare essentials for querying an API for now. Remember: this code belongs in a server-side-only file or inside an if (Meteor.isServer) {} block.

Meteor.methods({
  // The method expects a valid IPv4 address
  'geoJsonForIp': function (ip) {
    console.log('Method.geoJsonForIp for', ip);
    // Construct the API URL
    var apiUrl = 'http://www.telize.com/geoip/' + ip;
    // query the API
    var response = HTTP.get(apiUrl).data;
    return response;
  }
});

Once the method is available on the server, querying the location of an IP works simply by calling the method with a callback from the client:

Meteor.call('geoJsonForIp', '8.8.8.8', function (err, res) {
  console.log(res);
});

While this solution appears to work fine, there are two major flaws to this approach:

  1. If the API is slow to respond, requests will start queuing up.
  2. Should the API return an error, there is no way to pass it back to the UI.

To address the issue of queuing, you can add an unblock() statement to the method:
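The snippet referenced here was not included in the excerpt; it boils down to calling this.unblock() at the top of the method (this.unblock() is part of Meteor's documented method API — the placement shown is a sketch):

```javascript
Meteor.methods({
  'geoJsonForIp': function (ip) {
    // Allow subsequent method calls from this client to run
    // without waiting for this (potentially slow) API request.
    this.unblock();
    var apiUrl = 'http://www.telize.com/geoip/' + ip;
    return HTTP.get(apiUrl).data;
  }
});
```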

Calling an external API should always be done asynchronously. That way you can also return possible error values back to the browser, which solves the second issue. Let’s create a dedicated function for calling the API asynchronously to keep the method itself clean.

Using an asynchronous method to call an API

The listing below shows how to issue an HTTP.get call and return the result via a callback. It also includes error handling that can be shown on the client.

var apiCall = function (apiUrl, callback) {
  // try…catch allows you to handle errors
  try {
    var response = HTTP.get(apiUrl).data;
    // A successful API call returns no error,
    // only the contents of the JSON response
    callback(null, response);
  } catch (error) {
    var errorCode, errorMessage;
    if (error.response) {
      // The API responded with an error message and a payload
      errorCode = error.response.data.code;
      errorMessage = error.response.data.message;
    } else {
      // Otherwise use a generic error message
      errorCode = 500;
      errorMessage = 'Cannot access the API';
    }
    // Create an Error object and return it via the callback
    var myError = new Meteor.Error(errorCode, errorMessage);
    callback(myError, null);
  }
};

Inside a try…catch block, you can differentiate between a successful API call (the try block) and an error case (the catch block). A successful call returns null for the error argument of the callback; an error returns only an error object and null for the actual response.

There are different types of errors, and you want to differentiate between a problem with accessing the API and an API call that got an error inside the returned response. This is what the if statement checks for – in case the error object has a response property, both the code and the message for the error should be taken from it; otherwise you can display a generic error 500 saying that the API could not be accessed.

In both cases, success and failure, the callback delivers a result that can be passed back to the UI. In order to make the API call asynchronous, you need to update the method as shown in the next code snippet. The improved code unblocks the method and wraps the API call in a wrapAsync function.

Meteor.methods({
  'geoJsonForIp': function (ip) {
    // avoid blocking other method calls from the same client
    this.unblock();
    var apiUrl = 'http://www.telize.com/geoip/' + ip;
    // asynchronous call to the dedicated API calling function
    var response = Meteor.wrapAsync(apiCall)(apiUrl);
    return response;
  }
});
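Conceptually, Meteor.wrapAsync converts an error-first-callback function into one that returns the result or throws the error. Meteor's real implementation uses Fibers so it can wait without blocking the event loop; the hypothetical plain-JavaScript sketch below only illustrates that calling convention, and only works for callbacks that fire synchronously:

```javascript
// Sketch of the wrapAsync calling convention (NOT Meteor's implementation).
function wrapSync(fn) {
  return function () {
    var result, error, done = false;
    var args = Array.prototype.slice.call(arguments);
    // append an error-first callback that captures the outcome
    args.push(function (err, res) { error = err; result = res; done = true; });
    fn.apply(null, args);
    if (!done) throw new Error('callback did not fire synchronously');
    if (error) throw error; // errors surface as exceptions...
    return result;          // ...and results as plain return values
  };
}

// Usage: a callback-style function becomes a return/throw-style one.
var double = wrapSync(function (x, cb) { cb(null, x * 2); });
// double(21) === 42
```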

Finally, to allow requests from the browser and show error messages you should add a template similar to the following code.

<template name="telize">
  <p>Query the location data for an IP</p>
  <input id="ipv4" name="ipv4" type="text" />
  <button>Look up location</button>

  {{#with location}}
    {{#if error}}
      <p>There was an error: {{error.errorType}} {{error.message}}!</p>
    {{else}}
      <p>The IP address {{location.ip}} is in {{location.city}}
      ({{location.country}}).</p>
    {{/if}}
  {{/with}}
</template>

A Session variable called location is used to store the results of the API call. Clicking the button takes the content of the input box and sends it as a parameter to the geoJsonForIp method. The Session variable is set to the value returned via the callback.

This is the required JavaScript code for connecting the template with the method call:

Template.telize.helpers({
  location: function () {
    return Session.get('location');
  }
});

Template.telize.events({
  'click button': function (evt, tpl) {
    var ip = tpl.find('input#ipv4').value;
    Meteor.call('geoJsonForIp', ip, function (err, res) {
      // The method call sets the Session variable to the callback value
      if (err) {
        Session.set('location', {error: err});
      } else {
        Session.set('location', res);
      }
    });
  }
});

As a result you will be able to make API calls from the browser just like in this figure:

And that’s how to integrate an external API via HTTP!

Published at DZone with permission of its author, Stephan Hochhaus.


Isomorphic Apps with Meteor: Working With the Session Object


03.15.2015

This article, excerpted from Meteor in Action, gives you a simple example of how to use Meteor’s reactivity with the Session object and a dropdown list.

Traditionally, accessing a website via HTTP is stateless: a user requests one document after another. Because there is often a need to maintain a certain state between requests – for example, to keep a user logged in – the most essential way to store volatile data in a web application is the session. Meteor’s concept of a session is different from languages such as PHP, where a dedicated session object exists on the server or in a cookie. Meteor does not use HTTP cookies but the browser’s localStorage instead, for example for storing session tokens to keep a user logged in.

A dedicated Session object that is only available on the client and lives in memory only is useful for keeping track of current user contexts and actions.

The Session object

The Session object holds key-value pairs and can only be used on the client. Technically, it is a reactive dictionary that provides a get() and a set() method. Until a value is assigned to a Session key via set(), it remains undefined. This can be avoided by setting a default value using setDefault(), which works exactly like set() but only if the value is currently undefined. Because checking a session value is a frequent operation, the Session object also provides an equals() function. It is not necessary to declare a new Session variable using the var syntax; it becomes available as soon as a set() or setDefault() command is used.

This is the required syntax:

Session.setDefault("key", "default value");  // #1
Session.get("key");                          // #2
Session.set("key", "new value");             // #3
Session.equals("key", "expression");         // #4

  1. setDefault() only sets a value for key if the key is undefined
  2. returns the current value (in this case the default value)
  3. assigns a new value to key
  4. translates to Session.get("key") === "expression" but is more efficient, as it does not need to iterate through all keys within Session

Good to know: Although a Session variable is typically used with strings, it can also hold arrays or objects.
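To make these semantics concrete, here is a hypothetical plain-JavaScript miniature of the dictionary behaviour described above (get/set/setDefault/equals), deliberately leaving out Meteor's reactivity:

```javascript
// Hypothetical miniature of Session's dictionary semantics (no reactivity).
function createSession() {
  var store = {};
  return {
    set: function (key, value) { store[key] = value; },
    // setDefault works exactly like set, but only while the key is undefined
    setDefault: function (key, value) {
      if (store[key] === undefined) store[key] = value;
    },
    get: function (key) { return store[key]; },
    // equals is shorthand for get(key) === expected
    equals: function (key, expected) { return store[key] === expected; }
  };
}

var session = createSession();
session.setDefault('key', 'default value');
// session.get('key') === 'default value'
session.set('key', 'new value');
// session.equals('key', 'new value') === true
```

In the real Session object, get() additionally registers a dependency so that templates rerender when the value changes.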

Let’s see how we can apply the Session object to an application. In the corresponding book chapter we are building a housesitting app so we take that as an example. Consider Session to be the app’s short-term memory for keeping track of a currently selected house.

Using Session to store selected dropdown values

For the selectHouse template all we need to select a house from the database is a dropdown list. The idea is to retrieve all documents from the database and show all available names. Once a name is selected, it is going to define the data context for all other templates and a single house is displayed. We will be using the code shown below.

<template name="selectHouse">
  <select id="selectHouse">
    <option value="" {{isSelected}}></option>
    {{#each housesNameId}}
      <option value="{{_id}}" {{isSelected}}>{{name}}</option>
    {{/each}}
  </select>
</template>

We will now use a Session variable called selectedHouse to store the dropdown selection. Since the select box should reflect the actual selection, we need to add a selected attribute to the currently selected option. In order to do so, we define a second helper named isSelected that returns either an empty string or selected, depending on whether the value of _id equals that of our Session variable.

As the last step we need to set the value for the Session variable based on the user’s selection. Because it involves an action coming from the user this requires an event map.

Whenever the value of the DOM element with the ID selectHouse changes, the event handler will set the selectedHouse variable to the value of the selected option element. Note that we need to pass the event as an argument to the JavaScript function that sets the Session value in order to access its value:

Template.selectHouse.helpers({
  housesNameId: function () {             // #1
    return HousesCollection.find({}, {});
  },
  isSelected: function () {               // #2
    return Session.equals('selectedHouse', this._id) ? 'selected' : '';
  }
});

Template.selectHouse.events = {
  'change #selectHouse': function (evt) { // #3
    Session.set("selectedHouse", evt.currentTarget.value);
  }
};

  1. returns all documents from the collection
  2. returns selected if the _id of the currently processed house equals the one stored inside the Session variable
  3. remember to pass the event as an argument so the function can assign the selected value to the Session variable

You can test that everything works correctly by opening the JavaScript console inside a browser and selecting a value from the dropdown list. You can also get and set values for the variable directly inside your console. If you change the value to a valid _id, you can see that the dropdown list instantly updates itself thanks to the isSelected helper, as you can see in the figure.

Creating a reactive context using Tracker.autorun

When working with JavaScript code you will often need to check for the value of a variable to better understand why an application behaves the way it does. Using console.log() to keep track of variable contents is one of the most important tools for debugging. Since we are dealing with reactive data sources we can also take advantage of computations to monitor the actual values of those sources. Simply put, we are going to print the contents of the reactive Session variable anytime it changes. In order to do so we will create a reactive context for the execution of console.log().

Besides Templates and Blaze there is a third way to establish a context that enables reactive computations: Tracker.autorun(). Any function running inside such a block is automatically rerun whenever its dependencies (i.e., the reactive data sources used within it) change. Meteor automatically detects which data sources are used and sets up the necessary dependencies.

We can keep track of the value for Session.get(“selectedHouse”) by putting it inside an autorun. We place this code at the very beginning of the client.js file, outside of any Template blocks. Whenever we use the drop down list to select another value, the console immediately prints the currently selected ID. If no house is selected it will print undefined.

Use Tracker.autorun to print a Session variable to the console like this:

Tracker.autorun(function () {
  console.log("The selectedHouse ID is: " +
    Session.get("selectedHouse"));
});
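The dependency tracking behind Tracker.autorun can be illustrated with a miniature: a computation registers itself while it runs, and every reactive value it reads remembers it and reruns it on change. This is a simplified, hypothetical sketch, not Meteor's actual Tracker implementation:

```javascript
// Simplified sketch of autorun-style dependency tracking (not Meteor's Tracker).
var activeComputation = null;

function autorun(fn) {
  function computation() {
    activeComputation = computation; // reads inside fn register against us
    try { fn(); } finally { activeComputation = null; }
  }
  computation();
  return computation;
}

function ReactiveVar(value) {
  var dependents = [];
  return {
    get: function () {
      // record the currently running computation as a dependent
      if (activeComputation && dependents.indexOf(activeComputation) === -1) {
        dependents.push(activeComputation);
      }
      return value;
    },
    set: function (next) {
      if (next === value) return;
      value = next;
      dependents.forEach(function (c) { c(); }); // rerun dependent computations
    }
  };
}

var selectedHouse = ReactiveVar(undefined);
var log = [];
autorun(function () { log.push('ID is: ' + selectedHouse.get()); });
// log: ['ID is: undefined']
selectedHouse.set('abc123');
// log: ['ID is: undefined', 'ID is: abc123']
```

The real Tracker adds batching, invalidation, and stopping of computations, but the core idea is the same: Meteor detects which reactive sources a function reads and reruns it when any of them change.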

As you can see, the Session object is very simple to work with and can be extremely useful. It can be accessed from any part of the application and maintains its values even when you change source files and Meteor reloads your application (hot code push). If a user initiates a page refresh, however, all data is lost.

Keep in mind, though, that the contents of the Session object never leave the browser, so other clients – or even the server – can never access them.

Published at DZone with permission of its author, Stephan Hochhaus.



The illustrated guide to mobile apps with Meteor


Meteor is an amazing tool for creating applications for mobile devices. This guide shows you how to use Meteor to build, test, and submit an app to the app stores.

Note: This tutorial is a bit long and goes through all required steps, from initial app creation to download from the app stores! You’ll need to walk through several steps, listed in this agenda:

The app: Reactive H-Ball

In this tutorial you are going to build a simple application that helps you answer life’s most pressing questions, such as “Will I be pretty?” or “Will I be rich?” My last name starts with an H and the image we’ll use is that of a ball, so it’s probably a good idea to give it a name that resembles that. Because it uses Meteor’s reactive powers, it shall be called the Reactive H-Ball. Not to be mistaken for a magic 8-ball, of course.

Building the application

We implement limited functionality on purpose to focus on the specifics of mobile applications. All the app should do is display an answer to a yes/no question when the device is shaken. We’ll use an animation to make the otherwise quite boring application more mysterious.

Find the application sources on GitHub: https://github.com/yauh/reactive-hball.

The basics

Let’s start with a shell command – assuming you have Meteor installed:

$ meteor create reactiveHBall

Creating a new application in the Terminal

This leaves us with the new application and the three-file structure:

  • reactiveHBall.css
  • reactiveHBall.html
  • reactiveHBall.js

Because the application will be so simple, we can keep this structure. All we need is to add a folder named public/, which is where the graphics file will live.

Next, let’s take care of the core functionality. We’ll use a single template named hball, which will be used to display all content. It needs a single template helper, {{answer}}, and a button. Why a button? It’s easier for us to test this for now and only later substitute the type of event we’re listening to, so the button fulfils essentially the same purpose as a shake in the final app.

<template name="hball">
  <button id="shaker">Shake</button>
  <div id="ball">
    <div id="text">
      {{answer}}
    </div>
  </div>
</template>

The answer is stored inside a Session variable called answer. As such, the template helper looks like this:

Template.hball.helpers({
  answer: function () {
    return Session.get('answer');
  }
});

All answers are stored inside a simple array named answers. The value for the Session variable is set randomly by the click event:

Template.hball.events({
  'click button': function () {
    Session.set('answer', answers[
      // get a random number from the answers array
      Math.floor(Math.random() * answers.length)
    ]);
  }
});

Adding more style

Now that the core functionality is in place, let’s add some style.

First, we need the H-Ball to be an actual ball. We are dealing with various screen sizes, so we must ensure that the ball scales well to various resolutions. Therefore we need a responsive approach; however, we'll keep it simple (put this in your CSS file):

#ball {
  position: relative;
  display: inline-block;
  width: 100%;
  height: 0;
  padding: 50% 0;
  border-radius: 50%;
  background: #8fa4cd;
  color: white;
  font-size: 28px;
  line-height: 0;
  text-align: center;
}

Much better already. Now for the animation. This will be tied to the event (the button click for now, later on the shake):

Template.hball.events({
  'click button': function () {
    // fade in the text to make it more mysterious
    $('#text').fadeOut('1200', function () {
      Session.set('answer', answers[
        Math.floor(Math.random() * answers.length)
      ]);
      $('#text').fadeIn();
    });
  }
});

Simple enough, right? Let’s check out how it looks in an actual mobile device. There is an easy way to do so if you are using Chrome. Open up the developer tools and click on the icon next to the magnifying glass that looks like a smartphone. That brings up the device emulation that allows you to check how the screen looks on various devices.

Device Emulation in Chrome

Add mobile platforms

Back to the shell, we need to issue some commands to add support for iOS and Android. For adding iOS support you must have a Mac OS X machine and downloaded and opened Xcode at least once (opening it will make you accept some licenses, plus it guides you through creating the required Signing Identities and Provisioning Profiles – you need this for publishing to devices and it requires an Apple Developer account).

Issue the following commands for iOS support:

$ meteor install-sdk ios
$ meteor add-platform ios

Issue the following commands for Android support:

$ meteor install-sdk android
$ meteor add-platform android

Now the application can be run on both Apple devices as well as Android phones and tablets.

Running on device emulators

Let’s run on a device emulator first.

Run the app on the Android simulator:

$ meteor run android

The app on the Android simulator

Can you see it? We have too much text in the upper half of the screen. We should fix that. Maybe it helps to remove the shake button, but first, how does the app look on iOS? Run the app on the iPhone simulator!

$ meteor run ios

Preview on the iPhone simulator

That looks nice enough. Now shake it, baby!

Adding the shake

There are many ways to add device-specific functionality to a Meteor application. The simplest is to add a Meteor package (an Isopack). That way you do not have to deal with Cordova plugins directly (although you could) or with their limitations regarding version management (tl;dr: Meteor is not smart enough to determine version constraints from Cordova plugins, only from proper Isopacks).

The shake package is perfect for our purposes, add it via

$ meteor add shake:shake

You can find additional information regarding this package at its GitHub page. This is the code you will need to add to the JavaScript file to enable shaking for an answer (this should only run on the client!):

// avoid accidental "shakes" with a high enough sensitivity
var shakeSensitivity = 30;

// watch for shakes while the app runs
Meteor.startup(function () {
  if (shake && typeof shake.startWatch === 'function') {
    shake.startWatch(onShake, shakeSensitivity);
  } else {
    alert('Shake not supported');
  }
});

// onShake show an answer
// debounce ensures the function does not execute multiple times during shakes
onShake = _.debounce(function onShake() {
    console.log('device was shaken');
    $('#text').fadeOut('1200', function () {
      Session.set('answer', answers[
        Math.floor(Math.random() * answers.length)
      ]);
      $('#text').fadeIn();
    });
  },
  // fire the shake as soon as it occurs,
  // but not again if less than 1200ms have passed
1200, true);
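The _.debounce call above comes from Underscore, which Meteor bundles. Its leading-edge behaviour (third argument true: fire immediately, then suppress repeats within the wait window) can be sketched in plain JavaScript like this:

```javascript
// Minimal sketch of leading-edge debounce, as in _.debounce(fn, wait, true).
function debounce(fn, wait, immediate) {
  var timeout = null;
  return function () {
    var args = arguments, self = this;
    var callNow = immediate && !timeout;
    clearTimeout(timeout);
    timeout = setTimeout(function () {
      timeout = null;               // window closed; next call may fire again
      if (!immediate) fn.apply(self, args);
    }, wait);
    if (callNow) fn.apply(self, args); // fire on the leading edge
  };
}

// Two "shakes" within the wait window trigger the handler only once.
var count = 0;
var onShake = debounce(function () { count += 1; }, 50, true);
onShake();
onShake();
// count === 1
```

This is exactly why the article passes true as the third argument: the answer appears on the first shake, but jittery follow-up shake events within 1200 ms don't restart the animation.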

Running on hardware devices

Do you know your machine’s IP? Find it out, mine is 192.168.2.123. You’ll need it because the mobile device must connect to your machine somehow.

First, we need to create a new file named mobile-config.js in the root of the project. For now it will only contain a single line:

App.accessRule('http://192.168.2.123:3000/*');

By default, Cordova applications (this is the technology used by Meteor to create mobile applications) may not access any URLs unless they are whitelisted. Adjust the line above to your IP address and port where Meteor runs.

On Android you must first enable USB debugging on your phone. Connect your Android phone, unlock it, and start Meteor using

$ meteor run android-device --mobile-server http://192.168.2.123:3000

Of course the device must be able to reach that IP address. You can check first by running Meteor without any mobile options and trying to open the page in the mobile browser.

It takes a couple of seconds, but then you will be able to run the application on the device. I’m testing with a Nexus 5, and here comes the big surprise: the ball is a square!

This is one of the reasons you must test not only in a simulator but also on actual devices. Otherwise this square would have gone unnoticed, and perhaps even been published to the Play Store!

Is iOS any better? Connect your iPhone/iPad and rush to the shell:

$ meteor run ios-device --mobile-server http://192.168.2.123:3000

The previous command opens Xcode but does not actually run the app on the device. Once Xcode is open you can select any of the simulated or connected devices. I select the hardware device and test. Fortunately everything looks as it should, and even the shakes work fine.

Optimizations

Now that the core application is done, we can focus on some nice-to-haves.

Fix the appearance

Let’s address the issue with the circle being a square on Nexus first – it is as simple as adding the proper viewport to the HTML head section:

<head>
  <title>Reactive H-Ball</title>
  <meta name="viewport" content="width=device-width, initial-scale=1.0" />
</head>

Next, get rid of that shake button. I simply comment it out, so I can easily re-enable it for debugging purposes later.

Finally, we’ll use an image from the public/ folder instead of the <h1> title:

<img src="title.png" alt="The Reactive H-Ball" width="100%" />

Now why do we use an image? This application does not actually need any images, but most other apps will have some resources in public/ that you want to ship, so this sample app should cover that use case.

Offline applications

This application is so small and simple, it does not even take a server to do what it’s supposed to. In fact, many mobile applications should be able to survive without a network connection.

The first thing our Reactive H-Ball app should do upon startup is disconnect itself from the server. This is because Meteor does not currently allow/support client-only applications, and as just mentioned, Reactive H-Ball just doesn’t need a server. So, yes, we’re creatively working around the current limitations of Meteor a bit, but it’s okay. If your app needs to connect to a server, you should skip this part.
Add this line to the startup block:

Meteor.disconnect();

When you run on a device now, you no longer have to pass the mobile-server parameter; it isn’t used anyway. Note: this will disable our ability to update the application code without re-submitting the app to the Play/iTunes Store!

All application files should come from the device itself. There is no need to request them from a server and, in fact, there is no server to request them from since you’ve already disconnected.

Meteor offers a package called appcache that caches an app’s assets in the browser for offline use. Since Reactive H-Ball is a mobile application, it will not benefit from adding the package, because the installed app already contains all assets – including the contents of the public/ folder, which is where the title image lives. Using the appcache package is only recommended for applications that run in the browser and require support for offline usage.

Note that if we were using any MongoDB collections in our app, we’d want to add a means of persisting our data when the app closes. Currently, GroundDB is the most suitable solution for that situation.

Preparing for distribution

The app is almost finished, but we have a few final steps to complete before it is ready for the app store shelves: adding some meta information and images, and submitting a binary build of the app to the stores (for which you obviously need an account).

App information

Remember the mobile-config.js file we used for whitelisting the URL? It’s also used to set all meta information for our app. Adjust the block as needed:

App.info({
  // the bundle ID must be unique across the entire app store
  // usually reverse domains of the creators are used
  id: 'de.yauh.reactivehball',
  version: '1.0.0',
  name: 'ReactiveHBall',
  description: 'The all-knowing H-Ball answers your questions about life',
  author: 'Stephan Hochhaus',
  email: 'stephan@yauh.com',
  website: 'http://yauh.de/building-mobile-apps-with-meteor-the-reactive-h-ball/'
});

Icons and Splash Screens

For each device our application needs a dedicated icon and splash screen. Although there are many generators on the web that let you upload images and download the finished assets, I prefer to use Ionic locally. If you haven’t used Ionic yet (it’s simple, but requires a bunch of shell commands to create a new app, add platforms, and generate resources), use a web service like Image Gorilla or check out the hybrid:assets package.

Once the images are available you can add them to the application folder and adjust mobile-config.js (this is just an extract, there are many more sizes to add):

App.icons({
  // iOS
  'iphone': 'mobile-resources/ios/icon/Icon-60.png',
  // Android
  'android_ldpi': 'mobile-resources/android/icon/drawable-ldpi-icon.png',
});

App.launchScreens({
  // iOS
  'iphone': 'mobile-resources/ios/splash/Default~iphone.png',
  // Android
  'android_ldpi_portrait': 'mobile-resources/android/splash/drawable-port-ldpi-screen.png',
});

Build and deploy the application

Step 1 is to actually build the application. Use

$ meteor build --server=http://hball.yauh.de ../build

That ensures that a) the building process works and b) the files necessary for submission to the app stores get created.

Let’s assume that at some point the H-Ball application should contact the server. In that case you may want to deploy it on the web. The simplest way is to use mup.

Take some screenshots

When putting the app on the app stores you want people to see what the app looks like. Now’s the time to take a few screenshots of your app in action on both Android and iPhone devices.

Submitting to the stores

We start with the following file structure:

  • app contains the actual application sources
  • build contains the output of the meteor build command
  • build/android holds the necessary files for submitting to the Play Store
  • build/ios holds the necessary files for submitting to the iTunes Store

File structure of our project

Google Play

You need a Google developer account (one-time fee of $25). Once you have an account you can submit your applications via the developer console. Before you can do so, you must sign your application bundle or apk file.

Step 1 is to create a key. This key is essential and you mustn’t lose it. If the key is lost, you will then lose the ability to publish updates! Create a key via the shell command:

$ keytool -genkey -alias reactive-hball -keyalg RSA -keysize 2048 -validity 10000

Back up the cert file using the command

$ keytool -exportcert -alias reactive-hball -file hball.cer

Step 2 is to sign the unaligned application file. Running meteor build in the previous step created an android/ directory inside our project. There you can find a file named unaligned.apk. This needs to be signed.

Open a shell and cd to the android directory. The command used for signing the apk-file is

$ jarsigner -digestalg SHA1 unaligned.apk reactive-hball

Once the file is signed, it needs to be aligned. Do that by issuing

$ ~/.meteor/android_bundle/android-sdk/build-tools/21.0.0/zipalign 4 unaligned.apk production.apk

Now you have a new file production.apk. This should be submitted to the Play Store.

Step 3 is done in the Developer Console – a web interface for submitting your app. I hope you like filling out forms, because you need to do this quite a bit.

Editing product details on Google Play Store

Once you have uploaded the apk file you need to fill out the product details and upload some screenshots and icons. Green checkmarks will let you know when you’re done. Once all is green you can hit the Publish app button.

iTunes Store

You need an Apple Developer account, which will need to be renewed annually or else your apps will be removed from the store! That also means you need to pay $99 each year to stay listed. Got that? Then you’re ready to submit your app.

I assume you have set up a signing identity and provisioning profile (basically Xcode walks you through this, it’s very simple). It’s probably equivalent to the signing and aligning for Android, just in the Xcode interface.

Start by registering an App ID. Click on the link (yes, that one) and add the bundle ID you set in mobile-config.js.

Registering an explicit App ID

Next, head over to the iTunes Connect site. You will need to set up your application there before opening Xcode. Go to My Apps and click on the + sign to add a new iOS app. Most of the fields take whatever you want, except for the bundle identifier. Make sure to use the correct one (case matters!).

Create a new iOS app in iTunes Connect

If you like, you can start digging through even more forms to prepare your marketing collateral to publish the app. You might as well do it now, because you’ll have to eventually. Fill in all information and upload screenshots and come back here when you’re done.

Open the Xcode project file Reactive H-Ball.xcodeproj. Make sure an iOS device is connected via USB; you will need it.

Check the project settings under General and make sure there is a team selected for Identity and the correct bundle identifier is used. If the bundle identifier is wrong, you can adjust it in the Info settings.

Project Info settings in Xcode

One more thing – set the deployment target to iOS 7.1. Dealing with legacy versions of iOS requires more work, and a minimum of iOS 7.1 requires fewer icons and splash screens, making life easier overall.

Deployment Target iOS 7.1

Select the hardware device just like you did when running on the device. Now open the Product menu and click Archive.

Creating an archive in Xcode

Another window pops open with a list of all available archives. Perform a validation of the archive before you submit it to the App Store.

If that went well, you should now (finally!) be able to click on Submit to App Store… You’ll be rewarded by a “Submission Successful” message.

Submission Successful

Unfortunately, in my case I saw an error message. Some app icons for iPad were missing, so I needed to fix this. Back to the project settings, then click Use Asset Catalog for the App Icons.

Use Asset Catalog for App Icons

A new screen will open and you can drag the missing icons (in my case icon-76.png and the retina version) into the asset library.

Drag missing icons in asset library

Alas, the app will still not go to the app store. More hoops to jump through! Return to the iTunes Connect site. Before the app can be released, it must be tested. D’oh. You need a user with the “Internal Tester” role and TestFlight enabled (all under the Prerelease tab).

Prerelease Builds

Once you have invited a user (go to Internal Testers, select one or more users, and click Invite), all testers will receive an email inviting them to start testing:

Invitation to test using Test Flight

Install the TestFlight app on your device and then inside of it, open up your new mobile app.

Once everything is tested, you should add the build in the iTunes Connect Versions tab of your application under Build.

Adding a build

With a build added you can finally submit the app to be reviewed by Apple. Let the waiting begin.

1.0 Waiting for Review

Now might be a good time to check the current wait time for app reviews at appreviewtimes.com.

Get the apps


Get it on Google Play

And soon
Available on the App store


A Look at the Many Technologies for Cross-Platform Mobile Development


Introduction

The recent arrival of React Native has reignited the debate about cross-platform mobile development. People once believed that, as on the desktop, Web technology would let us build cross-platform mobile apps, but most gave up because of performance or missing functionality and had to build separate versions for each platform.

That hasn't stopped the exploration of cross-platform techniques, though; after all, who doesn't want to cut development costs and write once, run anywhere? Besides React Native, many other solutions have appeared in recent years. In this article I analyze them from a technical angle, for interested readers.

For ease of discussion, I group them into four schools:

  • Web school: also known as Hybrid, it builds the UI and functionality on Web technologies
  • Source-translation school: translate one language into Objective-C, Java, or C#, then develop with each platform's official tools
  • Compilation school: compile a language into binaries, producing shared libraries or apk/ipa/xap packages
  • Virtual-machine school: port a language's virtual machine to each platform and run on top of it

Web school

The Web school is the one everyone knows best, e.g. the famous PhoneGap/Cordova, which wraps native interfaces and exposes them to JavaScript; apps can run in the system WebView or ship with an embedded Chrome engine.

As one of the hottest debates of recent years it has already been discussed at length online, so here I'll focus on what everyone cares about most: performance.

The most common complaint about the Web school is that it is slow (meaning the rendering of embedded HTML, network loading time aside). But why is it slow? The usual explanation is that "the DOM is slow". Yet from the browser-implementation point of view, the DOM is just the set of document-manipulation APIs exposed to JavaScript; once JavaScript calls these APIs, execution enters the browser's internal C++ implementation, with little overhead in between. In theory, then, the browser's DOM should be faster than Android's "DOM", because most of Android's view framework is written in Java, and for the same functionality C++ is unlikely to be slower than Java (JIT optimization can occasionally do better, but only in a minority of cases).

So taken literally, "the DOM is slow" is wrong. The belief is probably widespread because most people don't understand browser internals; the DOM is the only thing they know the browser has, so it takes the blame for every problem.

Where do the problems really lie, then? I see three:

  • Early browser implementations were poor and unoptimized
  • CSS is too complex, making layout computation expensive
  • The DOM exposes too limited an interface, which makes optimization difficult

The first problem is the most critical and the hardest to solve. When people say Web performance is bad today, they mostly mean on Android; on iOS it is already smooth. Before Android 4 the WebView didn't even have GPU acceleration and repainted the whole page every time, so of course animations stuttered.

Browser-side optimization will come as Android 4.4 gradually spreads, since from 4.4 on the WebView renders with Chrome.

For the latest browsers, slow rendering is mainly the second problem: CSS is too complex. In terms of implementation principles Chrome and the Android View system are not fundamentally different, but CSS is so flexible and feature-rich that computing it is expensive, so naturally it is slower.

Can that be fixed by simplifying CSS? People have actually tried: Famo.us, for instance, whose biggest selling point is that it doesn't let you write CSS at all; you get only a fixed handful of layout methods and build the UI entirely in JavaScript, which effectively prevents inefficient CSS and thus improves performance.

For complex UIs and the very long ListViews common on phones, the third problem stands out: the DOM is a high-level API that doesn't let JavaScript control memory and threads at the fine granularity Native code can, so optimization is hard, and that shows clearly on weaker hardware. A year ago we experimented with embedding native components to solve this, though that approach requires support from the host app; perhaps browsers will someday ship a few optimized built-in Web Components that put the performance problem to rest.

None of these three problems is easy to solve right now, so some people drop HTML/CSS altogether and paint the UI themselves, e.g. React Canvas draws directly onto a Canvas. In my view this only solves part of the problem for now; I'll detail the issues with hand-drawn UI in later sections. A bit of history: six years ago, when browsers were still slow, Bespin did exactly this, and the project was later replaced by the DOM-based ACE. Mainstream editors today, including CodeMirror and Atom, render directly with the DOM, and someone at the W3C even wrote an article listing the many drawbacks of Canvas-based editors. So use Canvas with caution.

Besides Canvas, some assume WebGL is fast and try rendering to WebGL instead, e.g. HTML-GL. But its current implementation is lazy: put simply, it uses html2canvas to render DOM nodes into images and then feeds those images into WebGL as textures. That reimplements in JavaScript what the browser already does in C++, so rendering is bound to be slower, though it does let you dazzle people with GLSL effects.

Hardware acceleration is not the same as "fast". If you believe hardware acceleration always beats software, you should find time to study computer architecture.

Beyond performance, I think an even more serious problem for the Web school is missing functionality. iOS 8 alone added 4000+ new APIs, while Web standards go through a long drafting and review process; I won't live to see 4000 new Web APIs, and even hand-wrapping them the way Cordova does can't keep up. So to make good use of new system features, writing Native code is unavoidable.

P.S. Although I blamed the complexity of HTML/CSS for the performance problems, that complexity is also the Web's biggest strength, because the Web was designed to display documents. If you want rich text and layout, the rich-text components on iOS/Android fall far short of CSS, which is why many Native apps inevitably embed Web views.

Source-translation school

As noted above, writing Native code is unavoidable, but each platform has a different official language, which means writing the same logic twice or more. So people thought of reducing the workload by translating source code, e.g. converting Java into Objective-C.

This may not sound reliable, but it is actually the lowest-cost, lowest-risk approach: after conversion you use the official tools, and development differs little from the normal flow, so you needn't worry about running into weird problems. Do check whether the generated code is readable, though; rule out any solution whose output is unreadable.

Let's look at the conversion options that exist today.

Converting Java to Objective-C

j2objc converts Java code into Objective-C. Google reportedly uses it internally to cut cross-platform development costs; the Google Inbox project, for instance, claims to share 70% of its code through it, a significant result.

You may wonder why Google would build a tool that helps people write Objective-C, and some media called it a good deed. Personally I think Google's math works out nicely: almost every important app ships both Android and iOS versions, and with this tool you can build the Android version first and then derive the iOS version...

With a success story like that, this option is well worth trying, and crucially there are plenty of Java developers, so it's a quick way to port code to Objective-C.

Converting Objective-C to Java

The reverse direction exists too: MyAppConverter. It is more ambitious than j2objc in that it also tries to cover the UI layer; its list of converted components includes UIKit, CoreGraphics, and more, so some apps can be converted without any code changes. I'm not optimistic about that part; for most apps it isn't realistic.

It is currently a paid product I haven't tried, and I don't know the technical details, so I'll refrain from judging it.

Converting Java to C#

Mono offers Sharpen, a tool that converts Java code to C#, but few people seem to use it; with only 118 stars it doesn't look trustworthy.

JUniversal can also convert Java to C#, but it hasn't published a public release yet, so the details remain unknown. One distinctive feature is a small bundled cross-platform library covering file handling, JSON, HTTP, and OAuth components, on top of which you can build reusable business logic.

Compared with the tools that target Objective-C and Java, both C# converters look very immature, presumably because so few people use Windows Phone.

Converting Haxe to other languages

No discussion of source translation can skip Haxe, a peculiar language with no virtual machine or native compiler of its own: it runs only by translating into other languages, currently Neko (bytecode), JavaScript, ActionScript 3, PHP, C++, Java, C#, and Python. Someone has implemented Swift output, but it is unofficial, so for now supporting iOS development means running through Adobe AIR.

Haxe does well in game development: there is a cross-platform game engine, OpenFL, which can render via HTML5 Canvas, OpenGL, or Flash. OpenFL's developer experience is quite good; the same code compiles unmodified into executables for different platforms, and because compilation goes by way of C++ it has advantages in both performance and resistance to decompilation. Unfortunately it doesn't yet seem stable enough, otherwise it could be a strong competitor to Cocos2d-x.

On top of OpenFL there is a cross-platform UI toolkit, HaxeUI, but I find its look particularly ugly; it's only usable inside games.

So Haxe might work for cross-platform games today, but don't count on it for apps, and sharing code through it is even less realistic: so few developers know it that it would increase costs instead.

XMLVM

Besides source-to-source conversion there is the rather different XMLVM, which first converts bytecode into an XML-based intermediate format and then generates different languages through XSL; it currently targets C, Objective-C, JavaScript, C#, Python, and Java.

A bytecode-based intermediate makes supporting many languages easy, but it also makes the generated code unreadable, because much of a language's syntactic sugar is irreversibly erased in bytecode. Below is the Objective-C generated for a trivial example; it reads like assembly:

XMLVM_ENTER_METHOD("org.xmlvm.tutorial.ios.helloworld.portrait.HelloWorld", "didFinishLaunchingWithOptions", "?")
XMLVMElem _r0;
XMLVMElem _r1;
XMLVMElem _r2;
XMLVMElem _r3;
XMLVMElem _r4;
XMLVMElem _r5;
XMLVMElem _r6;
XMLVMElem _r7;
_r5.o = me;
_r6.o = n1;
_r7.o = n2;
_r4.i = 0;
_r0.o = org_xmlvm_iphone_UIScreen_mainScreen__();
XMLVM_CHECK_NPE(0)
_r0.o = org_xmlvm_iphone_UIScreen_getApplicationFrame__(_r0.o);
_r1.o = __NEW_org_xmlvm_iphone_UIWindow();
XMLVM_CHECK_NPE(1)
...

In my view this scheme is quite unreliable: if the generated code has a problem you can barely modify it, and you can't debug it either, so I don't recommend it.

Summary

Although source translation is low-risk, I suspect many small apps won't share much code this way: such apps are mostly built around the UI, and most of the code is coupled to it, so the common portion is small.

Among the concrete options, only j2objc is worth trying today; the rest are immature.

Compilation school

The compilation school goes a step further than source translation: it compiles a language directly into ordinary platform binaries. This approach has clear pros and cons:

  • Pros
    • You can reuse complex existing code, e.g. a game engine already written in C++ that would be too costly to rewrite
    • Compiled code is difficult to decompile
    • Performance may be better (depending on the implementation)
  • Cons
    • If the tool itself has bugs or performance problems, locating and fixing them is very costly
    • The compiled output isn't small, especially if you must support both ARMv8 and x86

Next let's go through the schemes in this school, language by language.

The C++ family

C++ is the most common choice, since Android, iOS, and Windows Phone all offer official support for C++ development. There are three usual approaches:

  • Use C++ only for the non-UI parts. This is the officially favored approach, and many apps do it, e.g. Mailbox and Microsoft Office
  • Draw the UI yourself with a 2D graphics library. This is common on the desktop, where many UIs have custom requirements, but still rare on mobile.
  • Draw the UI with OpenGL, as is common in games.

Using C++ for the non-UI parts is common enough that I won't repeat it here. Besides improving performance and sharing code, some also use it to hide critical code (such as secret keys). If you don't know how to structure such a cross-platform project, see Dropbox's open-source libmx3 project, which also embeds json and sqlite libraries and supports simple HTTP, an EventLoop, and thread creation by calling system libraries.

If you want to implement the UI layer in C++, iOS and Windows Phone are fairly easy thanks to the C++ supersets Objective-C++ and C++/CX respectively, but on Android things get troublesome: Android's UI is almost entirely implemented in Java, so the biggest challenge of a C++ UI is supporting Android. There are two ways: call the system's Java methods through JNI, or draw the UI yourself.

The first approach is feasible but terribly verbose; a simple method call takes this much code:

JNIEnv* env;
jclass testClass = (*env)->FindClass(env, "com/your/package/name/Test"); // get the class
jmethodID constructor = (*env)->GetMethodID(env, testClass, "<init>", "()V"); // get the constructor
jobject testObject = (*env)->NewObject(env, testClass, constructor); // instantiate
jmethodID callFromCpp = (*env)->GetMethodID(env, testClass, "callFromCpp", "()V"); // get the method id
(*env)->CallVoidMethod(env, testObject, callFromCpp); // invoke it
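For contrast, the same call written directly in Java collapses to a single line. A minimal sketch (the Test class and callFromCpp method here are hypothetical, matching the names used in the JNI snippet above):

```java
// Hypothetical Test class mirroring the one looked up via JNI above.
class Test {
    boolean called = false;

    void callFromCpp() {
        called = true;
    }
}

public class Main {
    public static void main(String[] args) {
        // The six lines of JNI glue become one direct call in Java.
        Test test = new Test();
        test.callFromCpp();
        System.out.println(test.called); // prints "true"
    }
}
```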

Would drawing the UI yourself be any more convenient? JUCE and QT, for example, draw their own; let's see what QT looks like:

qt-example

Looks pretty good, right? On Android 5, though, it falls apart: many effects are missing, buttons have no ripple, even the borders are gone. The root cause is that it simulates the system look with custom Qt Quick Controls styles instead of using system UI components, so it gets none of the UI improvements that come free with system upgrades and has to reimplement them itself, which is no small amount of work.

Had you used native Android components from the start, there would be nothing to do, and you could even use the new AppCompat library to get Material Design effects on pre-Android 5 devices.

The last approach is drawing the UI with OpenGL. Since EGL+OpenGL is itself cross-platform, building on it is convenient, and this is how most cross-platform games work at the bottom layer.

If cross-platform games can be built on OpenGL, can a UI be? Certainly: Android 4's own UI is based on OpenGL. But it doesn't use only the OpenGL API; that would be unrealistic, because the OpenGL API was never designed for drawing 2D graphics, so there isn't even a direct way to draw a circle. Android 4 therefore uses Skia to convert paths into vertex arrays or textures, which are then handed to OpenGL for rendering.

Fully reimplementing Android's UI stack, however, is a great deal of work; here is the line count of just some of the relevant code:

Path                                             Lines of code
frameworks/base/core/java/android/widget/        65622
frameworks/base/core/java/android/view/          49150
frameworks/base/libs/hwui/                       16375
frameworks/base/graphics/java/android/graphics/  18197

Text rendering alone is enormously complex. If you think it's simple, that only shows you haven't seen how big the world is: you may know that Chinese has encoding issues and English wraps lines with hyphens, but did you know that Traditional Chinese is set vertically, Arabic runs right to left, Japanese has ruby annotations (ルビ), and Indic scripts use abugidas (አቡጊዳ)...?

By contrast, building a separate UI for each platform looks like a lot of work, but each platform now has good official support with mature tools and documentation, so the real cost isn't that high, and users get an experience consistent with the system style. For most apps, then, drawing your own UI is a poor bargain.

There are exceptions: for apps with a highly distinctive UI, drawing it yourself has benefits beyond finer control, such as a uniform look across platforms. That's common in desktop software; on Windows nearly every must-have application looks different, and many support skinning, and there a hand-drawn UI fits well.

Xamarin

Xamarin lets you develop Android and iOS apps in C#. It evolved from Mono and currently looks commercially well run, with fairly complete tooling and documentation.

I discuss it under the compilation school because on iOS it is AOT-compiled into a binary; on Android it actually works by embedding the Mono virtual machine, which means shipping a 17 MB runtime.

For UI, it can call system APIs to use the built-in native components, or build cross-platform UIs with modest customization needs on Xamarin.Forms.

For a team familiar with C# this genuinely looks attractive, but the approach's biggest problem is the scarcity of material: when you hit a problem, a search may well turn up no solution. For lack of time I haven't studied it closely; I recommend this article, which lists its pros and cons as follows:

  • Pros
    • All the basic functionality needed to build an app is there
    • Commercial backing, and since the project benefits Windows Phone, Microsoft will support it strongly
  • Cons
    • Going deep reveals missing functionality, especially for custom UI, and since it isn't open source you can't tell how to fix problems when you hit them
    • Xamarin itself has some bugs
    • Too few resources, and nowhere near as many third-party libraries as the native platforms
    • Xamarin Studio still lags far behind Xcode and Android Studio in functionality

Compiling Objective-C for Windows Phone

Microsoft knows its Windows Phone is a niche platform, so it sensibly released a tool that compiles Objective-C projects to run on Windows Phone. Documentation on it is scarce, but given that Visual Studio supports Clang, it most likely compiles with the Clang front end, whose biggest benefit is that supporting Swift later will be easy; hence I file it under the compilation school.

For Android support Microsoft appears to use a virtual machine, so I cover that in the next section.

RoboVM

RoboVM compiles Java bytecode into machine code that runs on iOS, somewhat like GCJ. Concretely, it first compiles bytecode into LLVM IR using Soot, then uses the LLVM toolchain to produce binaries for different platforms.

For example, a simple new UITextField(new CGRect(44, 32, 232, 31)) ends up as machine code like this (x86):

call imp___jump_table__[j]org.robovm.apple.uikit.UITextField[allocator][clinit]
mov esi, eax
mov dword [ss:esp], ebx
call imp___jump_table__[j]org.robovm.apple.coregraphics.CGRect[allocator][clinit]
mov edi, eax
mov dword [ss:esp+0x4], edi
mov dword [ss:esp], ebx
mov dword [ss:esp+0xc], 0x40460000
...

Compiling from bytecode has the advantage of supporting any language built on the JVM: Scala, Kotlin, Clojure, and so on.

For its runtime it uses the same GC as GCJ, Boehm GC, a conservative collector that can leak memory, though the project says it has been optimized and the impact isn't large.

On the UI side it resembles Xamarin: you can call system APIs directly from Java to build the interface (Interface Builder support landed recently), as the example above shows. It also claims to support JavaFX, which would allow one UI codebase across iOS and Android, but that looks quite unreliable for now.

To me, RoboVM's biggest use today is building games with libGDX. It is far behind Cocos2d-x in features (especially scene and object management), but however you look at it, Java is much more convenient than C++ (don't tell me nobody makes games in Java; the $2.5 billion Minecraft does). This article is mainly about UI development, though, so I won't go deeper into that topic.

RoboVM is much like Xamarin, but lower-risk, because it only has to get iOS right, which suits teams that build the Android version first. The official documentation is too thin, however, and it's unclear how RoboVM performs, and how stable it is, on iOS.

Swift – Apportable/Silver

Apportable can compile Swift/Objective-C directly into machine code, but every success story on its site is a game, so using it for apps feels quite risky.

It later released Tengu, a tool aimed specifically at app development. It is more flexible than the earlier scheme: essentially the shared-C++-library approach with the language swapped for Swift/Objective-C, compiling Swift/Objective-C into cross-platform SO files for Android to call.

A similar product is Silver, not yet officially released. It supports not only Swift but also C# and its own Oxygene language (which looks like Pascal), plus a cross-platform non-UI library, Sugar. With only 17 stars it is far too obscure, so I frankly couldn't be bothered to study it.

Compiling Swift into an SO for Android to use is feasible, but the tools are all immature, so I don't recommend it.

Go

Go has been a hot backend language in recent years: simple syntax, high performance, and a sizable user base in China.

Go supports Android development from version 1.4 (with iOS support planned for 1.5), but for now it can call only a handful of APIs, such as OpenGL, so it's only good for games, and even that feels unrealistic: who builds games directly on OpenGL anymore? Most games sit on some framework, and Go has almost none; I've only seen the desktop engine Azul3D, which is very immature.

Because Android's View layer is written entirely in Java, writing UI in Go inevitably means calling Java code, and Go has no convenient way to do that yet. Go's only mechanism for calling external code is cgo, and going through cgo to JNI requires a lot of glue code, so Go 1.4 uses an RPC-like communication scheme instead; the examples in its source show how cumbersome that is, and the performance cost is surely significant.

cgo's implementation itself also costs performance: besides various extraneous function calls, it pins a Go system thread, which interferes with other goroutines; too many simultaneous external calls can even leave all goroutines waiting.

The root cause is that Go's stacks grow automatically. That makes spawning countless goroutines cheap, but it also means C-compiled functions cannot be called directly; a stack switch is required.

So cross-platform mobile development in Go is not viable today.

As an aside, Rust doesn't have Go's performance problem: it calls C functions with no overhead. But Rust has no official iOS/Android support yet; people have tried it and shown it works, just not stably. Judging by the language's design, Rust is well suited to replace C++ for this kind of cross-platform shared code, but its drawback is complex syntax, which will scare off many developers.

Xojo

I had always assumed BASIC was dead, yet here is an exception: Xojo uses BASIC and has an impressive-looking IDE that makes you feel you're using VisualBasic.

It seems positioned for kids or hobbyists, on the theory that it looks easy to learn, but I disagree: so few people use it that online material is scarce, so the cost will likely end up higher.

For lack of time, and of love for BASIC, I didn't really investigate it.

Summary

From the analysis so far, C++ is the safest choice, but it makes demands on the team; if nobody has written C++, try Xamarin or RoboVM.

Virtual-machine school

Besides compiling into binaries for each platform, the other common approach is to achieve cross-platform execution through a virtual machine. JavaScript and Lua, for instance, are born embeddable languages, so many schemes in this school use one of the two.

The VM school runs into two problems, though: performance overhead, and the VM itself taking up a fair amount of space.

The Java family

Cross-platform VMs bring Java to mind, a language designed from the start for portability. Sun's J2ME appeared as early as 1998; before the iPhone, many phone mini-games were built on J2ME, and the project is still alive today, able to run on a Raspberry Pi.

I mentioned earlier that Microsoft offers a tool for compiling Objective-C to run on Windows Phone. I found no details on its Android support, so for now I'll assume it uses a virtual machine. Judging by the description of Project Astoria it is done very thoroughly: it supports the C++ in the NDK and even implements the Java debug interface, so you can debug with Android Studio and other IDEs; the whole development experience is almost indistinguishable from working on an Android phone.

BlackBerry 10 also runs Android apps directly through an embedded VM, though reportedly with noticeable lag.

Note that the C# and Java schemes for iOS mentioned earlier are all implemented with AOT; I have yet to see a Java-VM scheme there. The main reason, I think, is iOS's restrictions: an ordinary app cannot call mmap or mprotect, so a JIT cannot be used for performance. If iOS ever opens up, perhaps someone will build a VM that, like Microsoft's, runs Android apps directly on iOS; then cross-platform development would be unnecessary, and learning Android development alone would suffice...

Titanium/Hyperloop

Quite a few people will have heard of Titanium, a well-known cross-platform solution from roughly the same era as PhoneGap. Its biggest difference from PhoneGap is that the UI doesn't use HTML/CSS; instead it designed its own XML-based UI framework, Alloy, with code that looks like this:

app/styles/index.tss
".container": {
  backgroundColor:"white"
},
// This is applied to all Labels in the view
"Label": {
  width: Ti.UI.SIZE,
  height: Ti.UI.SIZE,
  color: "#000", // black
  transform: Alloy.Globals.rotateLeft // value is defined in the alloy.js file
},
// This is only applied to an element with the id attribute assigned to "label"
"#label": {
  color: "#999" /* gray */
}

app/views/index.xml
<Alloy>
  <Window class="container">
    <Label id="label" onClick="doClick">Hello, World</Label>
  </Window>
</Alloy>

Earlier I said CSS's excessive flexibility drags down browser performance, so is building your own UI machinery more reliable? It does help performance, but it brings a learning-cost problem: simple interfaces are fine, but once you need deep customization you'll find very little material, so it still isn't dependable.

Titanium also provides a cross-platform API layer for convenience. That is its advantage and, even more, its weakness, with three problems in particular:

  1. Limited APIs: they come from Titanium, so they will always be fewer and later than the official APIs; Titanium can never keep up
  2. Limited material and community, far behind Android/iOS; when something breaks you don't know where to look for answers
  3. No third-party libraries: vendors won't ship a special Titanium edition, so whatever you use, you must wrap it yourself

Titanium recognized this too, and is now developing its next-generation solution, Hyperloop, which compiles JavaScript into native code, making native API calls convenient; on iOS it looks like this:

@import("UIKit");
@import("CoreGraphics");
var view = new UIView();
view.frame = CGRectMake(0, 0, 100, 100);

This scheme closely resembles the Xamarin approach mentioned earlier: essentially Objective-C transliterated into JavaScript, which means you can develop against Apple's official documentation. But when you hit some Objective-C construct whose JavaScript equivalent you can't figure out, you're on your own.

Judging from the GitHub commit history, the project has been in development for nearly two years and is still experimental, with only 8 commits in the past year, so it looks close to being abandoned; very unreliable.

I therefore consider both Titanium and Hyperloop quite unreliable and recommend against using them.

NativeScript

Given the problems caused by Titanium's custom APIs, some people took a different tack. The recently released NativeScript, put simply, uses tooling to auto-generate wrapper APIs that stay consistent with the system APIs.

With that wrapper generator it can conveniently build cross-platform components on top of the system APIs. Take the simple Button as an example, whose source lives in cross-platform-modules/ui/button; on Android it is implemented like this (TypeScript, with much code omitted):

export class Button extends common.Button {
    private _android: android.widget.Button;
    private _isPressed: boolean;

    public _createUI() {
        var that = new WeakRef(this);
        this._android = new android.widget.Button(this._context);
        this._android.setOnClickListener(new android.view.View.OnClickListener({
            get owner() {
                return that.get();
            },
            onClick: function (v) {
                if (this.owner) {
                    this.owner._emit(common.knownEvents.tap);
                }
            }
        }));
    }
}

And on iOS it is implemented like this (with much code omitted):

export class Button extends common.Button {
    private _ios: UIButton;
    private _tapHandler: NSObject;
    private _stateChangedHandler: stateChanged.ControlStateChangeListener;

    constructor() {
        super();
        this._ios = UIButton.buttonWithType(UIButtonType.UIButtonTypeSystem);

        this._tapHandler = TapHandlerImpl.new().initWithOwner(this);
        this._ios.addTargetActionForControlEvents(this._tapHandler, "tap", UIControlEvents.UIControlEventTouchUpInside);

        this._stateChangedHandler = new stateChanged.ControlStateChangeListener(this._ios, (s: string) => {
            this._goToVisualState(s);
        });
    }

    get ios(): UIButton {
        return this._ios;
    }
}

As you can see, usage matches the official SDKs exactly, just with the language swapped for JavaScript and rather odd-looking syntax, similar in style to Hyperloop above, so it shares the same syntax-mapping problem.

The biggest benefit of this approach is complete support for all system APIs and good support for third-party libraries. Its biggest drawback today is output size: even an empty app produces an 8.4 MB apk, because all the API bindings are generated, which also makes first launch on Android very slow.

Under the hood, NativeScript embeds V8 on Android and its own build of JavaScriptCore on iOS (which means no JIT optimization, for the reasons given earlier). The benefit is access to lower-level APIs and immunity to JS-engine differences across OS versions; the cost is larger binaries and, on iOS, worse performance than WKWebView.

WKWebView is implemented with multiple processes and is on iOS's whitelist, so it gets JIT.

The developer experience is quite good: one-click build and run, plus MVVM support with two-way data binding.

In my view NativeScript and Titanium share one big flaw: they are all-or-nothing. To use either you must build entirely on it; you cannot try it on just a few Views, nor embed third-party Views directly. Is there a solution that avoids both problems? Yes: React Native, which we turn to next.

React Native

There is already plenty of discussion of React Native online, including many answers on Zhihu; some are inaccurate about the low-level implementation, but most of the key points get covered.

Since I dislike repeating what others have said, let me talk about something else.

React Native's idea, simply put, is to use each platform's own native UI components. The idea isn't new; SWT did the same more than a decade ago.

Team-wise, quite a few members of Facebook's iOS team came from Apple, e.g. the manager and several members of the Paper team. Because iOS is closed source, developers out of Apple have an edge; for instance, Duet, built by a former Apple developer, crushes every competing product on the market. And judging from the iOS projects Facebook has open-sourced, their iOS experience and skill are solid, so what this team builds won't be bad.

While building React Native, Facebook was also building ComponentKit, a React-like framework in Objective-C++. A code sample:

@implementation ArticleComponent

+ (instancetype)newWithArticle:(ArticleModel *)article
{
  return [super newWithComponent:
          [CKStackLayoutComponent
           newWithView:{}
           size:{}
           style:{
             .direction = CKStackLayoutDirectionVertical,
           }
           children:{
             {[HeaderComponent newWithArticle:article]},
             {[MessageComponent newWithMessage:article.message]},
             {[FooterComponent newWithFooter:article.footer]},
           }];
}

@end

Its readability is noticeably worse than the XML in JSX, and as people gradually embrace Swift, an Objective-C++-based approach will probably be obsolete within a few years, so Facebook is right to bet on React.

I've seen people call this Facebook returning to HTML5, but React Native actually has little to do with the Web. What I understand by "Web" are the specs defined by the W3C, such as HTML, CSS, and the DOM; React Native mainly borrows the Flexbox notation from CSS, plus a few simple APIs like navigator and XMLHttpRequest, and it has none of the Web's openness, so React Native and HTML5 are not the same thing at all.

A large part of the iOS version of Facebook Groups is built with React Native, using quite a few internal components such as React and GraphQL. Let me gossip about the latter: GraphQL is a syntax for structured data queries, querying JSON data rather like MongoDB's query syntax, but it is not a document database; it's just a middle layer whose underlying data sources can be other databases. What it aims to replace are simple front-end/back-end HTTP protocols like RESTful, letting the front end fetch data more conveniently. It is said it will be open-sourced (apparently with a Node implementation).

The problem with dragging out the writing of an article is that things happen in the meantime. When I started writing, the outside world hadn't heard of GraphQL, hence the gossip; now it has been announced officially, though the announcement doesn't mention the Node implementation I described, which is still being developed quietly.

React Native's official video says it can update an app live in production; note that Apple explicitly forbids this (section 2.7 of the App Store Review Guidelines), so keep it low-key.

Someone in the comments points out that Apple actually changed the terms in iOS 8.2, allowing JavaScript to be downloaded and executed, and that even the author of UIKit thinks React Native is great.

What I particularly like is that React Native uses Flow, which supports declaring the types of function parameters and greatly improves code readability, and it also allows ES6 syntax such as the class keyword.

React Native's learning curve is far below that of traditional Objective-C and UIView; a developer familiar with JavaScript should be able to build a standard-UI screen within half a day, and laying out with XML+CSS is far more readable than manual Frame layout in UIView (I haven't used Storyboards; they look intuitive, but concurrent editing causes conflicts easily). Have a look at this detailed beginner tutorial and try it yourself; updating code with Command + R feels magical.

It already has a component registry, with 500+ repositories on GitHub, including native components such as sqlite and Camera; as these third-party components mature, React Native development will require less and less native code.

The bad news is that React Native for Android is still half a year away. That's understandable: things are much more complicated on Android, with Dalvik/ART sitting in the middle and making interop painful.

NativeScript and React Native differ greatly in emphasis, which has taken the two products in different directions:

  • React Native aims at development efficiency; it never set out to replace Native development entirely. Its rootView inherits from UIView, so you can use it on just some Views and mix it in easily without rewriting the whole app, though when mixing you must explicitly expose APIs to JavaScript
  • NativeScript, like Titanium, attempts development entirely in JavaScript: it exposes all system APIs to JavaScript so the language has Native-level capability by default, and builds on top of that

This difference in direction will give the two products different endings. I believe React Native will soundly beat NativeScript, because its adoption risk is far smaller: you can trial React Native on a few Views at any time and revert to a Native implementation if problems arise, keeping the risk under control; with NativeScript you can't, which makes teams afraid to choose it during technology selection.

Incidentally, the Angular team couldn't sit still after seeing React Native and began redesigning Angular 2's presentation architecture, splitting out the existing Render layer so that, like React, it can adapt to different runtime environments, including running on NativeScript.

All in all, I think React Native is well worth trying, and the risk is low.

Scripting inside game engines

Most game engines are cross-platform, and to improve productivity many of them embed scripting support, for example:

  • Ejecta, which implements the Canvas and Audio APIs and can run simple games, but still doesn't support Android
  • CocoonJS, which implements the WebGL API and can run games written with Three.js
  • Unreal Engine 3, scriptable in UnrealScript, a language with very Java-like syntax
  • Cocos2d-js, the JavaScript binding of Cocos2d-x, using SpiderMonkey as its internal JS engine
  • Unity 3D, where game logic can be written in C# or JavaScript
  • Corona, developed with Lua

Of these, only Unity 3D is doing well. Cocos2d-JS is said to be okay, with some small games using it. Corona feels fringe; although it supports simple UI elements such as buttons, I don't fancy it for apps. It's closed source, so I haven't studied it; its biggest advantage seems to be VM size, with the embedded build officially only 1.4 MB, a real strength of the Lua engine.

The remaining three are basically dead: Ejecta still doesn't support Android, CocoonJS has pivoted into a Crosswalk-like WebView product, and Unreal Engine 4 dropped UnrealScript in favor of C++ development; if you're curious, see the Epic founder's explanation of why.

In any case, none of these game engines suits app development: besides the UI-drawing problems discussed earlier, game engines generally repaint continuously, which drains far more battery than a normal app; users will notice and angrily uninstall.

Adobe AIR

From what I see around me, almost everyone believes Flash has abandoned mobile entirely. Adobe's messaging really backfired: it only dropped the mobile browser plugin, and Flash can still run on iOS, namely as Adobe AIR. For a team familiar with ActionScript it's a decent cross-platform game solution; Chinese game studios used it before, whether anyone still does I don't know. But since many uninformed juniors think Flash is dead on mobile, the talent pipeline is surely drying up; if you can't even hire people, nothing else matters.

Someone in the comments points out that on iOS it works by compilation, making it quite similar to Xamarin and RoboVM.

For app development, however, it equally lacks a good UI library: Flex has a poor track record and is basically dead; only Feathers is presentable, but it is designed mainly for in-game UI and isn't suitable for apps.

Dart

Dart has basically failed on the Web and is pivoting to mobile, with two approaches. One is Lua-style embedding to share common code, but because the Dart VM descends from V8 it was designed JIT-only, with no interpreter and not even bytecode, so it cannot run on iOS. The Dart team therefore built a small VM, Fletch, a traditional bytecode interpreter, currently only about 10k lines of code and as lightweight as Lua.

The other is the recently buzzed-about Sky. Let me grumble about the media here, at home and abroad: every report I saw said Google wants Dart to replace Java for Android development... This thing was indeed built by Google's Chrome team, but Google is a huge company with countless small teams, and one small team does not speak for Google. If this were a decision from Google's top, it would have been launched in the Google I/O keynote, not at a niche event like the Dart Developer Summit.

Some reports claim Sky supports only online apps and not offline use; that is wildly wrong. They were merely demoing its live-update capability; of course you can bundle the code into the app.

Sky's architecture, shown in the figure below, takes after Chrome: a message system mediates between the Dart code and the local environment, keeping the Dart code platform-independent so it can run on all sorts of platforms.

If you've read the earlier sections, you will, like me, be keen on one question: how does Sky draw its UI, via the system or by itself? Watching the Sky intro video I first assumed it rendered via Chrome, because the presenter is Eric Seidel, a very well-known developer on the WebKit project who worked on WebKit at Apple in the early years and moved to the Chrome team in 2008. But he never mentioned WebView, and the demo UI looked very much like native Material Design (ripple effects on tap, for instance), so I then guessed it used native UI the way React Native does.

Yet when I downloaded and analyzed the demo app, I found it used neither Chrome/WebView nor native UI components. Could it be drawing everything itself?

The Sky SDK source carries strong traces of the Web, e.g. support for standard CSS and many DOM APIs, yet the compiled output is tiny: libsky_shell.so is only 8.7 MB. I once tried trimming the Chrome engine, and even after deleting WebRTC and other peripheral features it was still 22 MB; at that size core Web features such as SVG and part of CSS3 must have been cut, so I suspected it implemented a stripped-down Chrome renderer.

Later I happened to read the Mojo source, which confirmed it. The earlier diagram undersells Mojo: it is not just a message system, it is a stripped-down Chrome engine! Running cloc over the code gives it away:

   12508 text files.
   11973 unique files.
    2299 files ignored.
-----------------------------------------------------------
Language              files     blank   comment      code
-----------------------------------------------------------
C++                    3485    129830    107745    689089
C/C++ Header           3569     92435    125742    417655
C                       266     37462     63659    269220
...

That's nearly 700k lines of C++ excluding comments, and one look at the directory layout shows an unmistakable Chromium flavor. In sheer technical difficulty it flattens every scheme above, and it also confirms my earlier point that a simplified CSS/HTML could solve the performance problem nicely.

This also explains why Eric was so vague about Mojo, letting people believe it was merely a message system. Had he said outright that it was a trimmed-down Chrome, imagine the misunderstanding; some editor would surely have run the headline "Google announces next-generation browser engine to replace Blink".

So when Dart earlier decided not to put the Dart VM into Chrome, it wasn't that they gave up in the face of opposition; they had forked a Chrome of their own to play with.

Overall, both Dart approaches are very immature. Sky looks technically powerful, but Dart's acceptance is extremely low; compared with its cross-platform benefits, its drawbacks weigh more: you can use neither third-party Native UI libraries nor third-party Web UI libraries, so its community will struggle to grow; it is destined to stay fringe. A real pity for these talented engineers, but direction matters more than effort; I hope they wake up soon and let Sky support JavaScript too.

My conclusions

By this point many readers will be dizzy: with so many options, which one fits you best? Which should you learn? Briefly, my view:

If you only know JavaScript, the best option today is React Native. With it you can build many small and mid-sized apps without knowing Native development; most apps won't take off anyway, so heavy investment isn't worthwhile, and if one does take off it's not too late to learn Native development then.

If you only know Java, try RoboVM or j2objc. j2objc is currently the more stable and dependable of the two, but unlike RoboVM it doesn't let you develop entirely in Java, so you'd still have to learn Objective-C for the UI; RoboVM's drawbacks are that it seems not yet stable, and apart from games I've seen no well-known apps using it, and its approach is inherently more fragile than j2objc's, so be mentally prepared to hit potholes.

If you only know C#, Xamarin is your only choice.

If you only know Objective-C, sadly there is no dependable option right now; I suggest learning Java, since knowing one more language never hurts.

If you only know C++, you can do games or the shared non-UI parts; I advise against QT or a hand-drawn UI, so go learn Native development instead.

If you only know Go, don't expect to build mobile apps with it: the current implementation is very inefficient, and the inefficiency is rooted in Go's internals and hard to optimize away, so I expect no change for quite a while.

If you know Rust, you clearly love tinkering and probably know all the languages above too; decide for yourself...

All of the above is for individuals. For a team there's nothing to think about: go Native, then mix in embeddable options such as Lua or React Native; never pick the all-or-nothing solutions mentioned earlier (such as Titanium), or you'll die a painful death.

This article touches many technical points; corrections for any inaccuracies are welcome, and you can follow my Weibo at weibo.com/nwind to discuss.

P.S. This article is about mobile. Many believe cross-platform has never worked, but it has, once: the Web, the most successful example in history, so successful that we take it for granted. Nothing grows in the shade of a great tree; it squeezed out the living space of every alternative. Remember the B/S versus C/S debate of a decade ago?


An Introduction to RxJava on Android (Part 1): Hello World



Lately I've been using the RxJava framework heavily in my work, and the deeper my understanding of RxJava goes, the more powerful this framework proves to be. Good things shouldn't be enjoyed alone, so over the next few articles I'll introduce how to use RxJava, from the basics up; I believe that by the end you will, like me, come to love this formidable weapon!

So what exactly is RxJava, and what do you gain from using it? RxJava is the Java implementation of ReactiveX, and the language versions of ReactiveX implemented so far include:

  • Java: RxJava
  • JavaScript: RxJS
  • C#: Rx.NET
  • C#(Unity): UniRx
  • Scala: RxScala
  • Clojure: RxClojure
  • C++: RxCpp
  • Ruby: Rx.rb
  • Python: RxPY
  • Groovy: RxGroovy
  • JRuby: RxJRuby
  • Kotlin: RxKotlin

This shows how hot ReactiveX is in application development. So what exactly is ReactiveX? In short, ReactiveX is "the observer pattern + the iterator pattern + functional programming". It extends the observer pattern, describing a series of events as an observable sequence of objects that subscribers observe and react to (by persisting, displaying, and so on); it borrows from the iterator pattern, iterating over multiple object sequences so that subscribers can process each sequence in turn; and it applies functional programming ideas to greatly simplify the steps of solving a problem.

Basic concepts of RxJava

At RxJava's core are two things: Observables (the observed, i.e. the event sources) and Subscribers (the observers). An Observable emits a series of events, which Subscribers subscribe to, receive, and handle. This looks just like the classic observer pattern, but with one key difference: if there are no observers (i.e. Subscribers), an Observable emits no events at all.

An Observable may emit not just one event but many. How do we ensure every event reaches the Subscribers for handling? Here RxJava borrows the iterator pattern: events are iterated over in turn (next(), hasNext()), and any exception during iteration is thrown immediately (throws Exceptions). The table below compares Observable with an iterator (Iterable):

Event               Iterable            Observable
receive data        T next()            onNext(T)
error occurred      throws Exception    onError(Exception)
iteration complete  !hasNext()          onCompleted()

Unlike the iterator pattern, which handles events synchronously in "pull" style, Observable works asynchronously in "push" style, which is more flexible for the Subscriber (observer).
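The push-style contract in the table can be sketched in a few lines of plain Java. Note this is a toy stand-in for RxJava's real Observable/Subscriber types; the names MiniObservable/MiniSubscriber are made up here to make that clear:

```java
import java.util.Arrays;
import java.util.List;

// Toy version of the Observable/Subscriber contract described above.
interface MiniSubscriber<T> {
    void onNext(T value);      // receive data    (iterator: T next())
    void onError(Exception e); // error occurred  (iterator: throws Exception)
    void onCompleted();        // sequence done   (iterator: !hasNext())
}

class MiniObservable<T> {
    private final List<T> items;

    MiniObservable(List<T> items) { this.items = items; }

    // Nothing happens until subscribe() is called -- mirroring the point
    // that an Observable without Subscribers emits no events.
    void subscribe(MiniSubscriber<T> s) {
        try {
            for (T item : items) s.onNext(item); // push each event
            s.onCompleted();
        } catch (Exception e) {
            s.onError(e);
        }
    }
}

public class Main {
    public static void main(String[] args) {
        StringBuilder log = new StringBuilder();
        new MiniObservable<>(Arrays.asList("Hello", "World")).subscribe(new MiniSubscriber<String>() {
            public void onNext(String v) { log.append(v).append(' '); }
            public void onError(Exception e) { log.append("error"); }
            public void onCompleted() { log.append("done"); }
        });
        System.out.println(log); // prints "Hello World done"
    }
}
```

The real RxJava Observable adds schedulers, operators, and asynchrony on top of this skeleton, but the onNext/onError/onCompleted lifecycle is the same.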

Getting ready for Hello World!

After all those abstract concepts you may be a bit lost, so let's work through an example that fetches a weather forecast.

Preparation

  1. For the forecast we'll use the API provided by Sina, at this address:
    http://php.weather.sina.com.cn/xml.php?city=%B1%B1%BE%A9&password=DJOYnieT8234jlsK&day=0
    Here, city is the URL-encoded city name.
    password is fixed.
    day is 0 for today's weather, 1 for tomorrow's, 2 for the day after, and so on, up to a maximum of 4.
  2. To simplify the code we use the Retrolambda framework (I'll write a dedicated article about it when time allows). It requires JDK 8, plus a JAVA8_HOME environment variable, as shown:
    (screenshot)
  3. For Android Studio, use the latest 1.2 release with the Gradle plugin 1.0.0. If you're still on Eclipse ADT, I suggest switching to Android Studio now; for Android development, Eclipse ADT is simply no match for it.
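As a small sketch of how the request URL from step 1 is assembled: the parameter names and the fixed password come straight from the URL above, and GBK percent-encoding of the city name is what produces %B1%B1%BE%A9 for 北京 (the helper class WeatherUrl is made up for illustration):

```java
import java.io.UnsupportedEncodingException;
import java.net.URLEncoder;

public class WeatherUrl {
    // Builds the Sina weather API URL described in step 1.
    // city: city name (percent-encoded with GBK); day: 0 = today .. 4 = four days ahead.
    static String buildUrl(String city, int day) throws UnsupportedEncodingException {
        return "http://php.weather.sina.com.cn/xml.php"
                + "?city=" + URLEncoder.encode(city, "GBK")
                + "&password=DJOYnieT8234jlsK"
                + "&day=" + day;
    }

    public static void main(String[] args) throws Exception {
        // 北京 encodes to %B1%B1%BE%A9 in GBK, matching the example URL above.
        System.out.println(buildUrl("北京", 0));
    }
}
```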

Setting up the environment

First create a new project in Android Studio, then modify the project-level build.gradle as follows:

buildscript {
    repositories {
        jcenter()
    }
    dependencies {
        classpath 'com.android.tools.build:gradle:1.0.0'
        classpath 'me.tatarka:gradle-retrolambda:3.0.1'
    }
}

allprojects {
    repositories {
        jcenter()
    }
}

Modify the module-level build.gradle as follows:

apply plugin: 'com.android.application'
apply plugin: 'me.tatarka.retrolambda'

retrolambda {
    jdk System.getenv("JAVA8_HOME")
    oldJdk System.getenv("JAVA6_HOME")
    javaVersion JavaVersion.VERSION_1_6
}

android {
    compileSdkVersion 21
    buildToolsVersion "21.1.2"

    compileOptions {
        sourceCompatibility JavaVersion.VERSION_1_8
        targetCompatibility JavaVersion.VERSION_1_8
    }

    defaultConfig {
        applicationId "com.example.hesc.weather"
        minSdkVersion 10
        targetSdkVersion 21
        versionCode 1
        versionName "1.0"
    }
    buildTypes {
        release {
            minifyEnabled false
            proguardFiles getDefaultProguardFile('proguard-android.txt'), 'proguard-rules.pro'
        }
    }
}

dependencies {
    compile fileTree(dir: 'libs', include: ['*.jar'])
    compile 'com.android.support:appcompat-v7:22.0.0'
    compile 'io.reactivex:rxandroid:0.24.0'
}

tasks.withType(JavaCompile){
    options.encoding="utf-8"
}

开发代码

首先新建布局文件activity_main.xml如下:

<LinearLayout xmlns:android="http://schemas.android.com/apk/res/android"
    android:layout_width="match_parent"
    android:layout_height="match_parent"
    android:orientation="vertical">

    <LinearLayout
        android:layout_width="match_parent"
        android:layout_height="50dp"
        android:orientation="horizontal"
        android:background="#FF0000">

        <EditText android:id="@+id/city"
            android:layout_width="0dp"
            android:layout_weight="1"
            android:layout_marginTop="8dp"
            android:layout_marginBottom="8dp"
            android:layout_marginLeft="10dp"
            android:layout_marginRight="10dp"
            android:paddingLeft="15dp"
            android:paddingRight="15dp"
            android:gravity="center_vertical"
            android:layout_height="match_parent"
            android:hint="请输入城市"
            android:background="@drawable/edit_bg"/>
        <TextView android:id="@+id/query"
            android:layout_width="80dp"
            android:layout_height="match_parent"
            android:text="查询"
            android:gravity="center"
            android:textColor="#FFFFFF"
            android:background="@drawable/button_bg"
            android:layout_gravity="center_vertical"/>
    </LinearLayout>

    <TextView android:id="@+id/weather"
        android:layout_width="match_parent"
        android:layout_height="match_parent"
        android:layout_margin="10dp"/>
</LinearLayout>

布局比较简单,就是一个输入城市的EditText+查询按钮+显示天气情况的TextView,相信朋友们都能看懂哈。

打开MainActivity,在onCreate方法中添加代码:

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        setContentView(R.layout.activity_main);
        //获取控件实例
        cityET = (EditText) findViewById(R.id.city);
        queryTV = (TextView) findViewById(R.id.query);
        weatherTV = (TextView) findViewById(R.id.weather);
        //对查询按钮侦听点击事件
        queryTV.setOnClickListener(this);
        weatherTV.setOnTouchListener(this);

    }

代码比较简单,不做过多解析。下面进入重点:通过网络连接获取天气预报,本案例是通过使用新浪提供的API来获取的,首先声明静态变量如下:

/**
     * 天气预报API地址
     */
    private static final String WEATHRE_API_URL="http://php.weather.sina.com.cn/xml.php?city=%s&password=DJOYnieT8234jlsK&day=0";

然后通过开HttpURLConnection连接获取天气预报,如下:

/**
     * 获取指定城市的天气情况
     * @param city
     * @return
     * @throws
     */
    private String getWeather(String city) throws Exception{
        BufferedReader reader = null;
        HttpURLConnection connection=null;
        try {
            String urlString = String.format(WEATHRE_API_URL, URLEncoder.encode(city, "GBK"));
            URL url = new URL(urlString);
            connection = (HttpURLConnection) url.openConnection();
            connection.setRequestMethod("GET");
            connection.setReadTimeout(5000);
            //连接
            connection.connect();

            //处理返回结果
            reader = new BufferedReader(new InputStreamReader(connection.getInputStream(), "utf-8"));
            StringBuffer buffer = new StringBuffer();
            String line;
            //逐行读取,直到流结束(readLine()返回null)
            while((line = reader.readLine()) != null)
                buffer.append(line);
            return buffer.toString();
        } finally {
            if(connection != null){
                connection.disconnect();
            }
            if(reader != null){
                try {
                    reader.close();
                } catch (IOException e) {
                    e.printStackTrace();
                }
            }
        }
    }

代码也比较简单,就是通过打开HttpURLConnection连接,根据城市名通过GET方式获取查询结果。由于使用了网络连接,别忘了在AndroidManifest.xml中申请网络访问权限:

    <!--申请网络访问权限-->
    <uses-permission android:name="android.permission.INTERNET"/>

通过网络连接请求返回的结果是xml文件,需要对xml进行解析,我们先创建一个描述天气情况的bean类,如下:

/**
     * 天气情况类
     */
    private class Weather{
        /**
         * 城市
         */
        String city;
        /**
         * 日期
         */
        String date;
        /**
         * 温度
         */
        String temperature;
        /**
         * 风向
         */
        String direction;
        /**
         * 风力
         */
        String power;
        /**
         * 天气状况
         */
        String status;

        @Override
        public String toString() {
            StringBuilder builder = new StringBuilder();
            builder.append("城市:" + city + "\r\n");
            builder.append("日期:" + date + "\r\n");
            builder.append("天气状况:" + status + "\r\n");
            builder.append("温度:" + temperature + "\r\n");
            builder.append("风向:" + direction + "\r\n");
            builder.append("风力:" + power + "\r\n");
            return builder.toString();
        }
    }

然后我们使用Pull的方式解析xml,代码如下:

/**
     * 解析xml获取天气情况
     * @param weatherXml
     * @return
     */
    private Weather parseWeather(String weatherXml){
        //采用Pull方式解析xml
        StringReader reader = new StringReader(weatherXml);
        XmlPullParser xmlParser = Xml.newPullParser();
        Weather weather = null;
        try {
            xmlParser.setInput(reader);
            int eventType = xmlParser.getEventType();
            while(eventType != XmlPullParser.END_DOCUMENT){
                switch (eventType){
                    case XmlPullParser.START_DOCUMENT:
                        weather = new Weather();
                        break;
                    case XmlPullParser.START_TAG:
                        String nodeName = xmlParser.getName();
                        if("city".equals(nodeName)){
                            weather.city = xmlParser.nextText();
                        } else if("savedate_weather".equals(nodeName)){
                            weather.date = xmlParser.nextText();
                        } else if("temperature1".equals(nodeName)) {
                            weather.temperature = xmlParser.nextText();
                        } else if("temperature2".equals(nodeName)){
                            weather.temperature += "-" + xmlParser.nextText();
                        } else if("direction1".equals(nodeName)){
                            weather.direction = xmlParser.nextText();
                        } else if("power1".equals(nodeName)){
                            weather.power = xmlParser.nextText();
                        } else if("status1".equals(nodeName)){
                            weather.status = xmlParser.nextText();
                        }
                        break;
                }
                eventType = xmlParser.next();
            }
            return weather;
        } catch(Exception e) {
            e.printStackTrace();
            return null;
        } finally {
            reader.close();
        }
    }

到现在为止,我们已经完成了通过网络连接获取天气预报xml,并将xml解析成Weather类这两步,其实已经完成了大部分的工作。接下来就是对这几部分工作进行整合,这里有以下两个问题需要注意:

  • 网络连接必须在单独的线程中处理,否则在 4.x 以上版本会直接报错(NetworkOnMainThreadException)
  • 查询结果需要显示到控件上,而更新控件必须在 UI 线程中进行

解决这两个问题的方式有很多种办法,最常用的就是AsyncTask或者就直接是Thread+Handler的方式,其实不管哪种方式,我觉得都没有RxJava那样写起来优雅,不信,你看:

/**
     * 采用普通写法创建Observable
     * @param city
     */
    private void observableAsNormal(String city){
        subscription = Observable.create(new Observable.OnSubscribe<Weather>() {
            @Override
            public void call(Subscriber<? super Weather> subscriber) {
                //1.如果已经取消订阅,则直接退出
                if(subscriber.isUnsubscribed()) return;
                try {
                    //2.开网络连接请求获取天气预报,返回结果是xml格式
                    String weatherXml = getWeather(city);
                    //3.解析xml格式,返回weather实例
                    Weather weather = parseWeather(weatherXml);
                    //4.发布事件通知订阅者
                    subscriber.onNext(weather);
                    //5.事件通知完成
                    subscriber.onCompleted();
                } catch(Exception e){
                    //6.出现异常,通知订阅者
                    subscriber.onError(e);
                }
            }
        }).subscribeOn(Schedulers.newThread())    //让Observable运行在新线程中
                .observeOn(AndroidSchedulers.mainThread())   //让subscriber运行在主线程中
                .subscribe(new Subscriber<Weather>() {
                    @Override
                    public void onCompleted() {
                        //对应上面的第5点:subscriber.onCompleted();
                        //这里写事件发布完成后的处理逻辑

                    }

                    @Override
                    public void onError(Throwable e) {
                        //对应上面的第6点:subscriber.onError(e);
                        //这里写出现异常后的处理逻辑
                        Toast.makeText(MainActivity.this, e.getMessage(), Toast.LENGTH_SHORT).show();
                    }

                    @Override
                    public void onNext(Weather weather) {
                        //对应上面的第4点:subscriber.onNext(weather);
                        //这里写获取到某一个事件通知后的处理逻辑
                        if(weather != null)
                            weatherTV.setText(weather.toString());
                    }
                });
    }

RxJava由于使用了多个回调,一开始理解起来可能有点难度,其实多看几遍也就明白了,它的招式套路都是一样的:

  1. 首先就是创建Observable,创建Observable有很多种方式,这里使用了Observable.create的方式;Observable.create()需要传入一个参数,这个参数其实是一个回调接口,在这个接口方法里我们处理开网络请求和解析xml的工作,并在最后通过onNext()、onCompleted()和onError()通知Subscriber(订阅者);
  2. 然后就是调用Observable.subscribe()方法对Observable进行订阅。这里要注意,如果不调用Observable.subscribe()方法,刚才在Observable.create()处理的网络请求和解析xml的代码是不会执行的,这也就解释了本文开头所说的“如果没有观察者(即Subscribers),Observables是不会发出任何事件的”
  3. 说了那么多,好像也没有开线程处理网络请求啊,这样不会报错吗?别急,认真看上面的代码,我还写了两个方法subscribeOn(Schedulers.newThread())和observeOn(AndroidSchedulers.mainThread()),没错,奥妙就在于此:
    3.1 subscribeOn(Schedulers.newThread())表示开一个新线程处理Observable.create()方法里的逻辑,也就是处理网络请求和解析xml工作
    3.2 observeOn(AndroidSchedulers.mainThread())表示subscriber所运行的线程是在UI线程上,也就是更新控件的操作是在UI线程上
    3.3 如果这里只有subscribeOn()方法而没有observeOn()方法,那么Observable.create()和subscriber()都是运行在subscribeOn()所指定的线程中;
    3.4 如果这里只有observeOn()方法而没有subscribeOn()方法,那么Observable.create()运行在主线程(UI线程)中,而subscriber()是运行在observeOn()所指定的线程中(本例的observeOn()恰好是指定主线程而已)

上面的代码由于使用了多个接口回调,代码看起来并不是那么完美,采用lambda的写法,看起来会更加简洁和优雅,不信,你看:

/**
     * 采用lambda写法创建Observable
     * @param city
     */
    private void observableAsLambda(String city){
        subscription = Observable.create(subscriber->{
                    if(subscriber.isUnsubscribed()) return;
                    try {
                        String weatherXml = getWeather(city);
                        Weather weather = parseWeather(weatherXml);
                        subscriber.onNext(weather);
                        subscriber.onCompleted();
                    } catch(Exception e){
                        subscriber.onError(e);
                    }
                }
        ).subscribeOn(Schedulers.newThread())    //让Observable运行在新线程中
                .observeOn(AndroidSchedulers.mainThread())   //让subscriber运行在主线程中
                .subscribe(
                        weather->{
                            if(weather != null)
                                weatherTV.setText(weather.toString());
                        },
                        e->{
                            Toast.makeText(MainActivity.this, e.getMessage(), Toast.LENGTH_SHORT).show();
                        });
    }

最后一步,就是点击查询按钮时触发上面的代码逻辑:

@Override
    public void onClick(View v) {
        if(v.getId() == R.id.query){
            weatherTV.setText("");
            String city = cityET.getText().toString();
            if(TextUtils.isEmpty(city)){
                Toast.makeText(this, "城市不能为空!", Toast.LENGTH_SHORT).show();
                return;
            }
            //采用普通写法创建Observable
            observableAsNormal(city);
            //采用lambda写法创建Observable
//            observableAsLambda(city);
        }
    }

通过上面的例子,相信大家已经对RxJava有了整体认识,最后献上代码和效果图:


源代码下载



RxJava 的使用入门


2015-04-26 19:03 by HalZhang

一、什么是 RxJava?

RxJava 是一个响应式编程框架,采用观察者设计模式。所以自然少不了 Observable 和 Subscriber 这两个东东了。

RxJava 是一个开源项目,地址:https://github.com/ReactiveX/RxJava

还有一个RxAndroid,用于 Android 开发,添加了 Android 用的接口。地址:https://github.com/ReactiveX/RxAndroid

二、例子

通过请求openweathermap 的天气查询接口返回天气数据

1、增加编译依赖

dependencies {
    compile fileTree(dir: 'libs', include: ['*.jar'])
    compile 'com.android.support:appcompat-v7:22.0.0'
    compile 'io.reactivex:rxjava:1.0.9'
    compile 'io.reactivex:rxandroid:0.24.0'
    compile 'com.squareup.retrofit:retrofit:1.9.0'
}

Retrofit 是一个 RESTful 风格的 HTTP 请求客户端。详见:http://square.github.io/retrofit/

2、服务器接口

/**
 * 接口
 * Created by Hal on 15/4/26.
 */
public class ApiManager {

    private static final String ENDPOINT = "http://api.openweathermap.org/data/2.5";

    /**
     * 服务接口
     */
    private interface ApiManagerService {
        @GET("/weather")
        WeatherData getWeather(@Query("q") String place, @Query("units") String units);
    }

    private static final RestAdapter restAdapter = new RestAdapter.Builder().setEndpoint(ENDPOINT).setLogLevel(RestAdapter.LogLevel.FULL).build();

    private static final ApiManagerService apiManager = restAdapter.create(ApiManagerService.class);

    /**
     * 将服务接口返回的数据,封装成{@link rx.Observable}
     * @param city
     * @return
     */
    public static Observable<WeatherData> getWeatherData(final String city) {
        return Observable.create(new Observable.OnSubscribe<WeatherData>() {
            @Override
            public void call(Subscriber<? super WeatherData> subscriber) {
                //订阅者回调 onNext 和 onCompleted
                subscriber.onNext(apiManager.getWeather(city, "metric"));
                subscriber.onCompleted();
            }
        }).subscribeOn(Schedulers.io());
    }
}

订阅者的回调有三个方法,onNext,onError,onCompleted

3、接口调用

/**
 * 多个 city 请求
 * map、flatMap 对 Observable 进行变换
 */
Observable.from(CITIES).flatMap(new Func1<String, Observable<WeatherData>>() {
    @Override
    public Observable<WeatherData> call(String s) {
        return ApiManager.getWeatherData(s);
    }
}).subscribeOn(Schedulers.io())
        .observeOn(AndroidSchedulers.mainThread())
        .subscribe(/*onNext*/new Action1<WeatherData>() {
            @Override
            public void call(WeatherData weatherData) {
                Log.d(LOG_TAG, weatherData.toString());
            }
        }, /*onError*/new Action1<Throwable>() {
            @Override
            public void call(Throwable throwable) {
            }
        });

/**
 * 单个 city 请求
 */
ApiManager.getWeatherData(CITIES[0]).subscribeOn(Schedulers.io())
        .observeOn(AndroidSchedulers.mainThread())
        .subscribe(new Action1<WeatherData>() {
            @Override
            public void call(WeatherData weatherData) {
                Log.d(LOG_TAG, weatherData.toString());
                ((TextView) findViewById(R.id.text)).setText(weatherData.toString());
            }
        }, new Action1<Throwable>() {
            @Override
            public void call(Throwable throwable) {
                Log.e(LOG_TAG, throwable.getMessage(), throwable);
            }
        });

/**
 * Android View 事件处理
 */
ViewObservable.clicks(findViewById(R.id.text), false).subscribe(new Action1<OnClickEvent>() {
    @Override
    public void call(OnClickEvent onClickEvent) {
    }
});

subscribeOn(Schedulers.io()) 和 observeOn(AndroidSchedulers.mainThread()) 分别指定了订阅(执行请求)和观察(处理结果)这两个动作所在的线程。Android 的 UI 更新需要在主线程中进行。

4、retrofit 支持 rxjava 整合

/**
 * 服务接口
 */
private interface ApiManagerService {
    @GET("/weather")
    WeatherData getWeather(@Query("q") String place, @Query("units") String units);

    /**
     * retrofit 支持 rxjava 整合
     * 这种方法适用于新接口
     */
    @GET("/weather")
    Observable<WeatherData> getWeatherData(@Query("q") String place, @Query("units") String units);
}

 Demo 代码

 


那些年我们错过的响应式编程



相信你们在学习响应式编程这个新技术的时候都会充满了好奇,特别是它的一些变体,例如:Rx系列、Bacon.js、RAC等等……

在缺乏优秀资料的前提下,响应式编程的学习过程将满是荆棘。起初,我试图寻找一些教程,却只找到少量的实践指南,而且它们讲的都非常浅显,从来没人接受围绕响应式编程建立一个完整知识体系的挑战。此外,官方文档通常也不能很好地帮助你理解某些函数,因为它们通常看起来很绕,不信请看这里:

Rx.Observable.prototype.flatMapLatest(selector, [thisArg])

根据元素下标,将可观察序列中每个元素一一映射到一个新的可观察序列当中,然后…%…………%&¥#@@……&**(晕了)

天呐,这简直太绕了!

我读过两本相关的书,一本只是在描绘响应式编程的伟大图景,而另一本却只是深入讲解如何使用某个响应式库。最后我以最艰难的方式学完了响应式编程:一边构建项目一边把它摸索透彻。在我所在公司的一个实际项目中会用到它,遇到问题时还能得到同事的支持。

学习过程中最难的部分是如何以响应式的方式来思考,这更多地意味着要摒弃那些老旧的命令式和状态式的典型编程习惯,并且强迫自己的大脑以不同的范式来运作。我还没有在网络上找到任何一个教程是从这个层面来剖析的,我觉得这个世界非常值得拥有一个优秀的实践教程,教你如何以响应式编程的方式来思考,引导你入门;之后再去看各种库的文档,才能给你更多的指引。希望这篇文章能够帮助你快速地进入响应式编程的世界。

“什么是响应式编程?”

网络上有一大堆糟糕的解释和定义,如Wikipedia上通常都是些非常笼统和理论性的解释,而Stackoverflow上的一些规范的回答显然也不适合新手来参考,Reactive Manifesto看起来也只像是拿给你的PM或者老板看的东西,微软的Rx术语“Rx = Observables + LINQ + Schedulers” 也显得太过沉重,而且充满了太多微软式的东西,反而给我们带来更多疑惑。相对于你使用的MV*框架以及你钟爱的编程语言,”Reactive”和”Propagation of change”这样的术语并没有传达任何有意义的概念。当然,我的view框架能够从model做出反应,我的改变当然也会传播,如果没有这些,我的界面根本就没有东西可渲染。

所以,不要再扯这些废话了。

响应式编程就是与异步数据流交互的编程范式

一方面,这已经不是什么新事物了。事件总线(Event Buses)或一些典型的点击事件本质上就是一个异步事件流(asynchronous event stream),这样你就可以观察它的变化并使其做出一些反应(do some side effects)。响应式是这样的一个思路:除了点击和悬停(hover)的事件外,你可以给任何事物创建数据流。数据流无处不在,任何东西都可以成为一个数据流,例如变量、用户输入、属性、缓存、数据结构等等。举个栗子,你可以把你的微博订阅功能想象成跟点击事件一样的数据流,你可以监听这样的数据流,并做出相应的反应。

最重要的是,你会拥有一些令人惊艳的函数去结合、创建和过滤任何一组数据流。 这就是”函数式编程”的魔力所在。一个数据流可以作为另一个数据流的输入,甚至多个数据流也可以作为另一个数据流的输入。你可以合并两个数据流,也可以过滤一个数据流得到另一个只包含你感兴趣的事件的数据流,还可以映射一个数据流的值到一个新的数据流里。

数据流是整个响应式编程体系中的核心,要想学习响应式编程,当然要先走进数据流一探究竟了。那现在就让我们先从熟悉的”点击一个按钮”的事件流开始

Click event stream

一个数据流是一个按时间排序的即将发生的事件(Ongoing events ordered in time)的序列。如上图,它可以发出3种不同的事件(上一句已经把它们叫做事件):一个某种类型的值事件,一个错误事件和一个完成事件。当一个完成事件发生时,在某些情况下,我们可能会做这样的操作:关闭包含那个按钮的窗口或者视图组件。

我们只能异步捕捉被发出的事件,使得我们可以在发出一个值事件时执行一个函数,发出错误事件时执行一个函数,发出完成事件时执行另一个函数。有时候你可以忽略后两个事件,只需聚焦于如何定义和设计在发出值事件时要执行的函数,监听这个事件流的过程叫做订阅,我们定义的函数叫做观察者,而事件流就可以叫做被观察的主题(或者叫被观察者)。你应该察觉到了,对的,它就是观察者模式

上面的示意图我们也可以用ASCII码的形式重新画一遍,请注意,下面的部分教程中我们会继续使用这幅图:

--a---b-c---d---X---|->

a, b, c, d 是值事件
X 是错误事件
| 是完成事件
---> 是时间线(轴)

现在你对响应式编程事件流应该非常熟悉了,为了不让你感到无聊,让我们来做一些新的尝试吧:我们将创建一个由原始点击事件流演变而来的一种新的点击事件流。

首先,让我们来创建一个记录按钮点击次数的事件流。在常用的响应式库中,每个事件流都会附有一些函数,例如 map,filter, scan等,当你调用这其中的一个方法时,比如clickStream.map(f),它会返回基于点击事件流的一个新事件流。它不会对原来的点击事件流做任何的修改。这种特性叫做不可变性(immutability),而且它可以和响应式事件流搭配在一起使用,就像豆浆和油条一样完美的搭配。这样我们可以用链式函数的方式来调用,例如:clickStream.map(f).scan(g):

  clickStream: ---c----c--c----c------c-->
               vvvvv map(c becomes 1) vvvv
               ---1----1--1----1------1-->
               vvvvvvvvv scan(+) vvvvvvvvv
counterStream: ---1----2--3----4------5-->

map(f)函数会根据你提供的f函数,把原事件流中的每一个值分别映射到新的事件流中。在上图的例子中,我们把每一次点击事件都映射成数字1;scan(g)函数则把流中的值按x = g(accumulated, current)的方式累积起来,本例的g其实就是简单的加法函数。这样,每当一个点击事件发生时,counterStream就会发出当前的点击事件总数。
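
上面 map 和 scan 的配合,可以用一段极简的 JavaScript 勾勒出来(仅为示意,`createStream` 是虚构的玩具实现,并非 RxJS 的实际 API):

```javascript
// 一个极简的"事件流"玩具实现(仅为示意,并非 RxJS 的实际 API)
function createStream() {
  const listeners = [];
  return {
    subscribe(fn) { listeners.push(fn); },
    emit(value) { listeners.forEach(fn => fn(value)); },
    // map(f):把原流的每个值经 f 变换后发到新流
    map(f) {
      const out = createStream();
      this.subscribe(v => out.emit(f(v)));
      return out;
    },
    // scan(g, seed):把流中的值按 acc = g(acc, cur) 累积后发出
    scan(g, seed) {
      const out = createStream();
      let acc = seed;
      this.subscribe(v => { acc = g(acc, v); out.emit(acc); });
      return out;
    }
  };
}

// clickStream: ---c----c--c--->  经 map(c 变成 1) 再 scan(+) 得到计数流
const clickStream = createStream();
const counterStream = clickStream
  .map(() => 1)
  .scan((acc, cur) => acc + cur, 0);

const counts = [];
counterStream.subscribe(n => counts.push(n));
clickStream.emit('click');
clickStream.emit('click');
clickStream.emit('click');
// counts 现在是 [1, 2, 3],对应图中的 counterStream
```

注意 map 和 scan 都没有修改原来的流,而是各自返回一个新流,这正是上文说的不可变性。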

为了展示响应式编程真正的魅力,我们假设你有一个”双击”事件流,为了让它更有趣,我们假设这个事件流同时处理”三次点击”或者”多次点击”事件,然后深吸一口气想想如何用传统的命令式和状态式的方式来处理,我敢打赌,这么做会相当的讨厌,其中还要涉及到一些变量来保存状态,并且还得做一些时间间隔的调整。

而用响应式编程的方式处理会非常简洁。实际上,逻辑处理部分只需要四行代码。但是,当前阶段让我们先忽略代码的部分。无论你是新手还是专家,看着图表思考来理解和建立事件流都是一个非常棒的方法。

多次点击事件流

图中,灰色盒子表示将上面的事件流转换为下面的事件流的函数。首先根据250毫秒的间隔时间(event silence,译者注:无事件发生的时间段,即上一个事件到下一个事件的间隔)把点击事件流一段一段隔开,再把每一段中的一个或多个点击事件收集到一个列表中(这就是buffer(stream.throttle(250ms))所做的事情,当前先不急着理解细节,我们先专注响应式的部分)。现在我们得到的是一个由列表组成的事件流,然后我们用map()把每一个列表映射为它的长度(一个整数),发到下一个事件流当中。最后用filter(x >= 2)过滤掉小于2的整数,只保留两次及以上的连续点击。就这样,我们用3步操作生成了我们想要的事件流,接下来,我们就可以订阅(“监听”)这个事件流并做出我们想要的操作了。

我希望你能感受到这个示例的优雅之处。当然了,这个示例也只是响应式编程魔力的冰山一角而已,你同样可以将这3步操作应用到不同种类的事件流中去,例如,一串API响应的事件流。另一方面,你还有非常多的函数可以使用。

“我为什么要采用响应式编程?”

响应式编程可以加深你代码抽象的程度,让你可以更专注于定义与事件相互依赖的业务逻辑,而不是把大量精力放在实现细节上,同时,使用响应式编程还能让你的代码变得更加简洁。

特别对于现在流行的webapps和mobile apps,它们的 UI 事件与数据频繁地产生交互,在开发这些应用时使用响应式编程的优点将更加明显。十年前,web页面的交互是通过提交一个很长的表单数据到后端,然后再做一些简单的前端渲染操作。而现在的Apps则演变的更具有实时性:仅仅修改一个单独的表单域就能自动的触发保存到后端的代码,就像某个用户对一些内容点了赞,就能够实时反映到其他已连接的用户一样,等等。

当今的Apps都含有丰富的实时事件来保证一个高效的用户体验,我们就需要采用一个合适的工具来处理,那么响应式编程就正好是我们想要的答案。

以响应式编程方式思考的例子

让我们深入到一些真实的例子,一个能够一步一步教你如何以响应式编程的方式思考的例子,没有虚构的示例,没有一知半解的概念。在这个教程的末尾我们将产生一些真实的函数代码,并能够知晓每一步为什么那样做的原因(知其然,知其所以然)。

我选择了JavaScript和RxJS来作为本教程的语言,原因是:JavaScript是目前最多人熟悉的语言,而Rx系列的库覆盖了非常多的语言和平台,例如.NET、Java、Scala、Clojure、JavaScript、Ruby、Python、C++、Objective-C/Cocoa、Groovy等等。所以,无论你用的是什么语言、库、工具,你都能从下面这个教程中学到东西(从中受益)。

实现一个推荐关注(Who to follow)的功能

在Twitter里有一个UI元素向你推荐你可以关注的用户,如下图:

Twitter Who to follow suggestions box

我们将聚焦于模仿它的主要功能,它们是:

  • 开始阶段,从API加载推荐关注的用户账户数据,然后显示三个推荐用户
  • 点击刷新,加载另外三个推荐用户到当前的三行中显示
  • 点击每一行推荐用户上的'x'按钮,清除当前被点击的用户,并在当前行显示一个新的用户
  • 每一行显示一个用户的头像并且在点击之后可以链接到他们的主页。

我们可以先不管其他的功能和按钮,因为它们是次要的。因为Twitter最近关闭了未经授权的公共API调用,我们将用Github获取用户的API代替,并且以此来构建我们的UI。

如果你想先看一下最终效果,这里有完成后的代码

Request和Response

在Rx中是怎么处理这个问题呢?在开始之前,我们要明白,(几乎)一切都可以成为一个事件流,这就是Rx的准则(mantra)。让我们从最简单的功能开始:”开始阶段,从API加载推荐关注的用户账户数据,然后显示三个推荐用户”。其实这个功能没什么特殊的,简单的步骤分为: (1)发出一个请求,(2)获取响应数据,(3)渲染响应数据。ok,让我们把请求作为一个事件流,一开始你可能会觉得这样做有些夸张,但别急,我们也得从最基本的开始,不是吗?

开始时我们只需做一次请求,如果我们把它作为一个数据流的话,它只能成为一个仅仅返回一个值的事件流而已。一会儿我们还会有很多请求要做,但当前,只有一个。

--a------|->

a就是字符串:'https://api.github.com/users'

这是一个我们要请求的URL事件流。每当发生一个请求时,它将告诉我们两件事:什么时候做了什么事(when and what)。什么时候请求被执行,什么时候事件就被发出。而做了什么就是请求了什么,也就是请求的URL字符串。

在Rx中,创建返回一个值的事件流是非常简单的。其实事件流在Rx里的术语是叫”被观察者”,也就是说它是可以被观察的,但是我发现这名字比较傻,所以我更喜欢把它叫做事件流

var requestStream = Rx.Observable.just('https://api.github.com/users');

但现在,这只是一个字符串的事件流而已,并没有做其他操作,所以我们需要在发出这个值的时候做一些我们要做的操作,可以通过订阅(subscribing)这个事件来实现。

requestStream.subscribe(function(requestUrl) {
  // execute the request
  jQuery.getJSON(requestUrl, function(responseData) {
    // ...
  });
});

注意到我们这里使用的是jQuery的AJAX回调方法(我们假设你已经很了解jQuery和AJAX了)来处理这个异步的请求操作。但是,请稍等一下,Rx就是用来处理异步数据流的,难道它就不能处理来自请求(request)在未来某个时间响应(response)的数据流吗?好吧,理论上是可以的,让我们尝试一下。

requestStream.subscribe(function(requestUrl) {
  // execute the request
  var responseStream = Rx.Observable.create(function (observer) {
    jQuery.getJSON(requestUrl)
    .done(function(response) { observer.onNext(response); })
    .fail(function(jqXHR, status, error) { observer.onError(error); })
    .always(function() { observer.onCompleted(); });
  });

  responseStream.subscribe(function(response) {
    // do something with the response
  });
});

Rx.Observable.create()所做的就是创建你自己定制的事件流,并对数据事件(onNext())和错误事件(onError())显式地通知该流的每一个观察者(或订阅者)。我们做的只是把jQuery Ajax Promise简单地封装了一下而已。等等,这是否意味着jQuery Ajax Promise本质上就是一个被观察者(Observable)呢?

Amazed

是的。

Promise++就是被观察者(Observable),在Rx里你可以使用这样的操作:var stream = Rx.Observable.fromPromise(promise),就可以很轻松的将Promise转换成一个被观察者(Observable),非常简单的操作就能让我们现在就开始使用它。不同的是,这些被观察者都不能兼容Promises/A+,但理论上并不冲突。一个Promise就是一个只有一个返回值的简单的被观察者,而Rx就远超于Promise,它允许多个值返回。

这样甚至更好,因为这突显出被观察者至少和Promise一样强大。所以如果你相信Promise宣传的东西,那么也请留意一下响应式编程能胜任些什么。

现在回到示例当中,你应该能很快发现,我们在subscribe()方法的内部再次调用了subscribe()方法,这有点类似于回调地狱(callback hell),而且responseStream的创建也依赖于requestStream。之前我们说过,在Rx里有很多简单的机制可以从已有事件流转换并创建出新的事件流,那么,我们也应该这样做试试。

现在你需要了解的一个最基本的函数是map(f),它可以从事件流A中取出每一个值,并对每一个值执行f()函数,然后将产生的新值填充到事件流B。如果将它应用到我们的请求和响应事件流当中,那我们就可以将请求的URL映射到一个响应Promises上了(伪装成数据流)。

var responseMetastream = requestStream
  .map(function(requestUrl) {
    return Rx.Observable.fromPromise(jQuery.getJSON(requestUrl));
  });

然后,我们创造了一个叫做”metastream“的怪兽:一个装载了事件流的事件流。先别惊慌,metastream就是每一个发出的值都是另一个事件流的事件流。你可以把它想象成一个[指针(pointers)](https://en.wikipedia.org/wiki/Pointer_(computer_programming))数组:每一个单独发出的值就是一个_指针_,它指向另一个事件流。在我们的示例里,每一个请求URL都映射到一个指针,它指向包含响应数据的promise数据流。

Response metastream

一个响应的metastream,看起来确实让人容易困惑,看样子对我们一点帮助也没有。我们只想要一个简单的响应数据流,每一个发出的值是一个简单的JSON对象,而不是一个JSON对象的’Promise’。ok,让我们来见识一下另一个函数:flatMap,它是map()的一个变体,可以把metastream“压扁”:一切在”分支”事件流中发出的事件,都会在”主躯干”事件流中发出。flatMap并不是metastream的修复版,metastream也不是一个bug,它俩在Rx中都是处理异步响应事件的好工具、好帮手。
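
上面 flatMap“压扁”metastream 的行为,可以用几行 JavaScript 勾勒出来(仅为示意,`createStream`、`coldStream`、`flatMap` 都是为说明而虚构的玩具实现,并非 RxJS 的实际 API):

```javascript
// 极简的"事件流"玩具实现(仅为示意,并非 RxJS 的实际 API)
function createStream() {
  const listeners = [];
  return {
    subscribe(fn) { listeners.push(fn); },
    emit(v) { listeners.forEach(fn => fn(v)); }
  };
}

// "冷"流:谁订阅,就立刻把 values 依次发给谁(用来模拟响应子流)
function coldStream(values) {
  return { subscribe(fn) { values.forEach(fn); } };
}

// flatMap:把每个值交给 f 得到一个子流,并把所有子流发出的值"压扁"到同一个输出流
function flatMap(stream, f) {
  const out = createStream();
  stream.subscribe(v => f(v).subscribe(inner => out.emit(inner)));
  return out;
}

// 模拟:requestStream 发出 URL,每个 URL 映射为一个"响应"子流
const requestStream = createStream();
const responseStream = flatMap(requestStream, url => coldStream(['response for ' + url]));

const seen = [];
responseStream.subscribe(v => seen.push(v));
requestStream.emit('url-1');
requestStream.emit('url-2');
// seen 现在是 ['response for url-1', 'response for url-2']
```

订阅者面对的是一个普通的响应值的流,而不是“流的流”,这正是 flatMap 与 map 的区别。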

var responseStream = requestStream
  .flatMap(function(requestUrl) {
    return Rx.Observable.fromPromise(jQuery.getJSON(requestUrl));
  });

Response stream

很赞,因为我们的响应事件流是根据请求事件流定义的,如果我们以后有更多事件发生在请求事件流的话,我们也将会在相应的响应事件流收到响应事件,就如所期待的那样:

requestStream:  --a-----b--c------------|->
responseStream: -----A--------B-----C---|->

(小写的是请求事件流, 大写的是响应事件流)

现在,我们终于有响应的事件流了,并且可以用我们收到的数据来渲染了:

responseStream.subscribe(function(response) {
  // render `response` to the DOM however you wish
});

让我们把所有代码合起来,看一下:

var requestStream = Rx.Observable.just('https://api.github.com/users');

var responseStream = requestStream
  .flatMap(function(requestUrl) {
    return Rx.Observable.fromPromise(jQuery.getJSON(requestUrl));
  });

responseStream.subscribe(function(response) {
  // render `response` to the DOM however you wish
});

刷新按钮

我还没提到本次响应的JSON数据是含有100个用户数据的list,这个API只允许指定页面偏移量(page offset),而不能指定每页大小(page size),我们只用到了3个用户数据而浪费了其他97个,现在可以先忽略这个问题,稍后我们将学习如何缓存响应的数据。

每当刷新按钮被点击,请求事件流就会发出一个新的URL值,这样我们就可以获取新的响应数据。这里我们需要两个东西:点击刷新按钮的事件流(准则:一切都能作为事件流),我们需要将点击刷新按钮的事件流作为请求事件流的依赖(即点击刷新事件流会引起请求事件流)。幸运的是,RxJS已经有了可以从事件监听者转换成被观察者的方法了。

var refreshButton = document.querySelector('.refresh');
var refreshClickStream = Rx.Observable.fromEvent(refreshButton, 'click');

因为刷新按钮的点击事件并不携带将要请求的API URL,我们需要把每次点击映射到一个实际的URL上。现在我们把点击事件流转换成了请求事件流:把每次点击映射成一个带随机页面偏移量(offset)参数的API URL。

var requestStream = refreshClickStream
  .map(function() {
    var randomOffset = Math.floor(Math.random()*500);
    return 'https://api.github.com/users?since=' + randomOffset;
  });

因为我比较笨而且也没有使用自动化测试,所以我刚把之前做好的一个功能搞烂了。这样,请求在一开始的时候就不会执行,而只有在点击事件发生时才会执行。我们需要的是两种情况都要执行:刚开始打开网页和点击刷新按钮都会执行的请求。

我们知道如何为每一种情况做一个单独的事件流:

var requestOnRefreshStream = refreshClickStream
  .map(function() {
    var randomOffset = Math.floor(Math.random()*500);
    return 'https://api.github.com/users?since=' + randomOffset;
  });

var startupRequestStream = Rx.Observable.just('https://api.github.com/users');

但是我们是否可以将这两个合并成一个呢?没错,是可以的,我们可以使用merge()方法来实现。下图可以解释merge()函数的用处:

stream A: ---a--------e-----o----->
stream B: -----B---C-----D-------->
          vvvvvvvvv merge vvvvvvvvv
          ---a-B---C--e--D--o----->
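
上图的 merge 行为,可以用几行 JavaScript 勾勒出来(仅为示意,`createStream`、`merge` 都是为说明而虚构的玩具实现,并非 RxJS 的实际 API):

```javascript
// 极简的"事件流"玩具实现(仅为示意,并非 RxJS 的实际 API)
function createStream() {
  const listeners = [];
  return {
    subscribe(fn) { listeners.push(fn); },
    emit(v) { listeners.forEach(fn => fn(v)); }
  };
}

// merge:两个流发出的事件,按实际发生的先后顺序汇入同一个输出流
function merge(streamA, streamB) {
  const out = createStream();
  streamA.subscribe(v => out.emit(v));
  streamB.subscribe(v => out.emit(v));
  return out;
}

const a = createStream();
const b = createStream();
const merged = merge(a, b);

const seen = [];
merged.subscribe(v => seen.push(v));

// 模拟图中交错发出的事件:---a-B---C--e--->
a.emit('a'); b.emit('B'); b.emit('C'); a.emit('e');
// seen 现在是 ['a', 'B', 'C', 'e']
```

输出流不关心事件来自哪个输入流,只按时间顺序原样转发,这就是 merge 和后面会讲的 combineLatest 的根本区别。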

现在做起来应该很简单:

var requestOnRefreshStream = refreshClickStream
  .map(function() {
    var randomOffset = Math.floor(Math.random()*500);
    return 'https://api.github.com/users?since=' + randomOffset;
  });

var startupRequestStream = Rx.Observable.just('https://api.github.com/users');

var requestStream = Rx.Observable.merge(
  requestOnRefreshStream, startupRequestStream
);

还有一个更干净的写法,省去了中间事件流变量:

var requestStream = refreshClickStream
  .map(function() {
    var randomOffset = Math.floor(Math.random()*500);
    return 'https://api.github.com/users?since=' + randomOffset;
  })
  .merge(Rx.Observable.just('https://api.github.com/users'));

甚至可以更简短,更具有可读性:

var requestStream = refreshClickStream
  .map(function() {
    var randomOffset = Math.floor(Math.random()*500);
    return 'https://api.github.com/users?since=' + randomOffset;
  })
  .startWith('https://api.github.com/users');

startWith()函数做的事和你预期的完全一样。无论你的输入事件流是怎样的,经过startWith(x)处理后输出的事件流一定以x开头。但我不喜欢重复自己(DRY),而我现在正在重复API的URL字符串。改进的方法是把startWith()挪到refreshClickStream那里,这样就可以在启动时模拟一次刷新按钮的点击了。

var requestStream = refreshClickStream.startWith('startup click')
  .map(function() {
    var randomOffset = Math.floor(Math.random()*500);
    return 'https://api.github.com/users?since=' + randomOffset;
  });

不错,如果你倒回到”搞烂了的自动测试”的地方,然后再对比这两个地方,你会发现我仅仅是加了一个startWith()函数而已。
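
startWith 的语义也可以用几行 JavaScript 勾勒出来(仅为示意,`createStream`、`startWith` 都是为说明而虚构的玩具实现,并非 RxJS 的实际 API):

```javascript
// 极简的"事件流"玩具实现(仅为示意,并非 RxJS 的实际 API)
function createStream() {
  const listeners = [];
  return {
    subscribe(fn) { listeners.push(fn); },
    emit(v) { listeners.forEach(fn => fn(v)); }
  };
}

// startWith:订阅时先立刻收到初始值,之后照常收到原流的事件
function startWith(stream, initial) {
  return {
    subscribe(fn) {
      fn(initial);            // 订阅的瞬间先发出初始值
      stream.subscribe(fn);   // 之后转发原流的事件
    }
  };
}

const refreshClicks = createStream();
// 启动时模拟一次"点击",之后每次真实点击同样会触发请求
const clicksWithStartup = startWith(refreshClicks, 'startup click');

const requests = [];
clicksWithStartup.subscribe(() => requests.push('GET /users'));
refreshClicks.emit('click');
// requests 现在有两个请求:启动时一次 + 真实点击一次
```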

用事件流将3个推荐的用户数据模型化

直到现在,我们只是在响应事件流(responseStream)的subscribe()函数的渲染步骤里稍微提及了一下推荐关注的UI。现在有了刷新按钮,就出现了一个问题:当你点击了刷新按钮,当前的三个推荐关注用户没有被清除,而新的推荐关注用户数据要等响应到达后才能拿到。为了让UI看起来更舒服,我们需要在点击刷新按钮的事件发生时,清除当前的三个推荐关注用户。

refreshClickStream.subscribe(function() {
  // clear the 3 suggestion DOM elements 
});

不,老兄,还没那么快。我们又出现了新的问题,因为我们现在有两个订阅者在影响着推荐关注的UI DOM元素(另一个是responseStream.subscribe()),这看起来并不符合关注分离(Separation of concerns)原则,还记得响应式编程的原则么?

Mantra

现在,让我们把推荐关注的用户数据模型化成事件流形式,每个被发出的值是一个包含了推荐关注用户数据的JSON对象。我们将把这三个用户数据分开处理,下面是推荐关注的1号用户数据的事件流:

var suggestion1Stream = responseStream
  .map(function(listUsers) {
    // get one random user from the list
    return listUsers[Math.floor(Math.random()*listUsers.length)];
  });

其他的,如推荐关注的2号用户数据的事件流suggestion2Stream和3号用户数据的事件流suggestion3Stream,都可以从suggestion1Stream复制粘贴得到。这里并不是在提倡重复代码,只是为了让我们的示例更加简单,而且我认为这是一个思考如何避免重复代码的好案例。

我们不在responseStream的subscribe()中处理渲染了,我们这样处理:

suggestion1Stream.subscribe(function(suggestion) {
  // render the 1st suggestion to the DOM
});

回到”当刷新时,清除掉当前的推荐关注用户”这个问题,我们可以很简单地把刷新点击映射为null(表示没有推荐数据),并且在suggestion1Stream中合并进来,如下:

var suggestion1Stream = responseStream
  .map(function(listUsers) {
    // get one random user from the list
    return listUsers[Math.floor(Math.random()*listUsers.length)];
  })
  .merge(
    refreshClickStream.map(function(){ return null; })
  );

当渲染时,我们将 null解释为”没有数据”,然后把UI元素隐藏起来。

suggestion1Stream.subscribe(function(suggestion) {
  if (suggestion === null) {
    // hide the first suggestion DOM element
  }
  else {
    // show the first suggestion DOM element
    // and render the data
  }
});

现在我们大概的示意图如下:

refreshClickStream: ----------o--------o---->
     requestStream: -r--------r--------r---->
    responseStream: ----R---------R------R-->
 suggestion1Stream: ----s-----N---s----N-s-->
 suggestion2Stream: ----q-----N---q----N-q-->
 suggestion3Stream: ----t-----N---t----N-t-->

N代表null

作为一种补充,我们可以在一开始的时候就渲染空的推荐内容。这通过把startWith(null)添加到推荐关注的事件流就可以了:

var suggestion1Stream = responseStream
  .map(function(listUsers) {
    // get one random user from the list
    return listUsers[Math.floor(Math.random()*listUsers.length)];
  })
  .merge(
    refreshClickStream.map(function(){ return null; })
  )
  .startWith(null);

结果是这样的:

refreshClickStream: ----------o---------o---->
     requestStream: -r--------r---------r---->
    responseStream: ----R----------R------R-->
 suggestion1Stream: -N--s-----N----s----N-s-->
 suggestion2Stream: -N--q-----N----q----N-q-->
 suggestion3Stream: -N--t-----N----t----N-t-->

推荐关注的关闭和使用已缓存的响应数据(responses)

只剩这一个功能没有实现了,每个推荐关注的用户UI会有一个’x’按钮来关闭自己,然后在当前的用户数据UI中加载另一个推荐关注的用户。最初的想法是:点击任何关闭按钮时都需要发起一个新的请求:

var close1Button = document.querySelector('.close1');
var close1ClickStream = Rx.Observable.fromEvent(close1Button, 'click');
// and the same for close2Button and close3Button

var requestStream = refreshClickStream.startWith('startup click')
  .merge(close1ClickStream) // we added this
  .map(function() {
    var randomOffset = Math.floor(Math.random()*500);
    return 'https://api.github.com/users?since=' + randomOffset;
  });

这样行不通,因为它会关闭并重新加载全部的推荐关注用户,而不仅仅是我们点击的那一个。这里有几种方式可以解决这个问题,为了让它更有趣,我们将通过重用之前的响应数据来解决。这个API每页响应的数据有100个用户,而我们只使用了其中3个,所以还有一大堆未使用的数据可以拿来用,不必再去请求更多数据。

ok,再来,我们继续用事件流的方式来思考。当’close1’点击事件发生时,我们想要使用responseStream最近一次发出的响应数据,并从响应列表里随机抽出一个用户数据来,就像下面这样:

    requestStream: --r--------------->
   responseStream: ------R----------->
close1ClickStream: ------------c----->
suggestion1Stream: ------s-----s----->

在Rx中有一个叫做combineLatest的组合函数,应该正是我们需要的。这个函数把数据流A和数据流B作为输入,无论哪一个数据流发出了值,combineLatest都会取两个数据流最近发出的值a和b,作为f函数的输入计算出一个输出值(c = f(a,b)),下面的图表会让这个函数的过程看起来更加清晰:

stream A: --a-----------e--------i-------->
stream B: -----b----c--------d-------q---->
          vvvvvvvv combineLatest(f) vvvvvvv
          ----AB---AC--EC---ED--ID--IQ---->

f是转换成大写的函数
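
上图的 combineLatest 行为,可以用几行 JavaScript 勾勒出来(仅为示意,`createStream`、`combineLatest` 都是为说明而虚构的玩具实现,并非 RxJS 的实际 API):

```javascript
// 极简的"事件流"玩具实现(仅为示意,并非 RxJS 的实际 API)
function createStream() {
  const listeners = [];
  return {
    subscribe(fn) { listeners.push(fn); },
    emit(v) { listeners.forEach(fn => fn(v)); }
  };
}

// combineLatest:任一输入流发出值时,取两个流"最近一次"的值交给 f 计算
function combineLatest(streamA, streamB, f) {
  const out = createStream();
  let latestA, latestB, hasA = false, hasB = false;
  streamA.subscribe(v => {
    latestA = v; hasA = true;
    if (hasB) out.emit(f(latestA, latestB)); // 两边都有值才会产生输出
  });
  streamB.subscribe(v => {
    latestB = v; hasB = true;
    if (hasA) out.emit(f(latestA, latestB));
  });
  return out;
}

const a = createStream();
const b = createStream();
// f 是"转换成大写"的函数,对应图中 AB、AC 等输出
const combined = combineLatest(a, b, (x, y) => (x + y).toUpperCase());

const seen = [];
combined.subscribe(v => seen.push(v));
a.emit('a');   // b 还没有值,暂无输出
b.emit('b');   // 输出 'AB'
a.emit('e');   // 输出 'EB'
// seen 现在是 ['AB', 'EB']
```

注意第一个值 'a' 发出时没有任何输出,要等两边都至少发出过一次值才会开始产生结果,这正是后文要用 startWith 模拟首次点击来解决的问题。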

这样,我们就可以把combineLatest()用在close1ClickStream和responseStream上了。只要关闭按钮被点击,我们就能拿到最近的响应数据,并在suggestion1Stream上产生一个新值。另一方面,combineLatest()也是对称的:每当responseStream发出一个新的响应,它也会结合最近一次的关闭按钮点击,产生一个新的推荐关注用户数据。这很有趣,因为它可以简化我们的suggestion1Stream代码:

var suggestion1Stream = close1ClickStream
  .combineLatest(responseStream,
    function(click, listUsers) {
      return listUsers[Math.floor(Math.random()*listUsers.length)];
    }
  )
  .merge(
    refreshClickStream.map(function(){ return null; })
  )
  .startWith(null);

One piece of the puzzle is still missing. combineLatest() uses the most recent value from each of its two sources, but if one of those sources hasn't emitted anything yet, combineLatest() cannot produce a data event on the output stream. If you look at the ASCII diagram above (the first diagram in this section), you will see that there is no output when the first stream emits value a; only when the second stream emits value b does an output get produced.

There are several ways to solve this, and we will use the simplest one: simulating a 'close 1' click at startup:

var suggestion1Stream = close1ClickStream.startWith('startup click') // we added this
  .combineLatest(responseStream,
    function(click, listUsers) {
      return listUsers[Math.floor(Math.random()*listUsers.length)];
    }
  )
  .merge(
    refreshClickStream.map(function(){ return null; })
  )
  .startWith(null);

Wrapping up

And we're done. Here is the complete code, all put together:

var refreshButton = document.querySelector('.refresh');
var refreshClickStream = Rx.Observable.fromEvent(refreshButton, 'click');

var closeButton1 = document.querySelector('.close1');
var close1ClickStream = Rx.Observable.fromEvent(closeButton1, 'click');
// and the same logic for close2 and close3

var requestStream = refreshClickStream.startWith('startup click')
  .map(function() {
    var randomOffset = Math.floor(Math.random()*500);
    return 'https://api.github.com/users?since=' + randomOffset;
  });

var responseStream = requestStream
  .flatMap(function (requestUrl) {
    return Rx.Observable.fromPromise($.ajax({url: requestUrl}));
  });

var suggestion1Stream = close1ClickStream.startWith('startup click')
  .combineLatest(responseStream,
    function(click, listUsers) {
      return listUsers[Math.floor(Math.random()*listUsers.length)];
    }
  )
  .merge(
    refreshClickStream.map(function(){ return null; })
  )
  .startWith(null);
// and the same logic for suggestion2Stream and suggestion3Stream

suggestion1Stream.subscribe(function(suggestion) {
  if (suggestion === null) {
    // hide the first suggestion DOM element
  }
  else {
    // show the first suggestion DOM element
    // and render the data
  }
});

You can see a working demo of this example project here.

That small chunk of code accomplishes a lot: it manages multiple event streams with a proper separation of concerns, and it even caches responses. The functional style makes the code read more like declarative programming than imperative programming: we are not giving a sequence of instructions to execute, we are just telling it what something is by defining relationships between event streams. For instance, with Rx we told the computer that suggestion1Stream is the stream that combines a 'close 1' click with one user taken from the latest response, and is otherwise null whenever a refresh happens or the program starts.

Notice the absence of control-flow statements such as if, for, and while, and of the callback-based control flow typical of JavaScript applications. You can even get rid of the if and else above by using filter() before subscribe() if you want (I'll leave the implementation details to you as an exercise). In Rx we have stream functions such as map, filter, scan, merge, combineLatest, and startWith, and many more functions to control the flow of an event-driven program. This toolset of functions gives you more power with less code.
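To make the composition idea concrete, here is a toy push-based stream with map and filter, a hypothetical sketch of how such operators chain together; it is nothing like the real RxJS internals:

```javascript
// A toy push-based stream with map/filter operators (illustration only,
// not how RxJS is actually implemented).
function createStream() {
  const listeners = [];
  return {
    subscribe(fn) { listeners.push(fn); },
    emit(value) { listeners.forEach(fn => fn(value)); },
    map(f) {
      const out = createStream();
      this.subscribe(v => out.emit(f(v)));
      return out;
    },
    filter(pred) {
      const out = createStream();
      this.subscribe(v => { if (pred(v)) out.emit(v); });
      return out;
    }
  };
}

const suggestions = createStream();
const seen = [];
// filter() replaces the if/else inside subscribe():
suggestions.filter(s => s !== null).subscribe(s => seen.push(s));
suggestions.emit(null);
suggestions.emit('user42');
console.log(seen); // only non-null values arrive
```

The point is that subscribe never needs branching logic: each operator returns a new stream, and the pipeline itself encodes the control flow.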

What comes next

If you think Rx will be your preferred library for reactive programming, take a while to get acquainted with the big list of functions for transforming, combining, and creating Observables. If you want to understand those functions in terms of event-stream diagrams, take a look at RxJava's very useful documentation with marble diagrams. Whenever you get stuck, draw those diagrams, think, look at the long list of functions, and think some more. In my experience, this works really well.

Once you start programming with Rx, be aware that understanding the concept of Cold vs Hot Observables is essential; ignore it and it will come back and bite you mercilessly. You have been warned. Sharpen your skills by learning functional programming, and get acquainted with common issues such as side effects as they apply to Rx.

Rx is not the only reactive programming library out there. There is Bacon.js, which is relatively intuitive and free of Rx's quirks. The Elm Language supports reactive programming in its own way: it is a reactive programming language that compiles to JavaScript + HTML + CSS, and it features a time travelling debugger. Pretty awesome.

Rx works great for frontends and apps, where a lot of events need to be handled, but it is not only for the client side: it works near the backend and close to databases too. In fact, RxJava is a key component enabling server-side concurrency in Netflix's API. Rx is not a framework restricted to one type of application or one language; it really is a great paradigm you can follow when programming any event-driven software.

If this tutorial helped you, tweet it forward.


Getting Started with React and Webpack


I have been learning React.js recently. I used to write React code in the most primitive way, and found it painful to organize. I had heard people say that organizing React components with Webpack works beautifully, so I spent some time learning it, and it paid off handsomely.

About React

A component has its own structure, its own logic, and its own styles; it depends on some resources, and possibly on other components. A fairly conventional way to write a component day to day:

- Define the structure with a frontend template engine
- Write the logic in a JS file
- Write the component's styles in CSS
- Resolve dependencies between modules with a library such as RequireJS or SeaJS

So what does this look like in React?

Structure and logic

In the React world, structure and logic are both organized in JSX files. React embeds the template inside the logic, implementing JSX, a blend of JS code and HTML.

Structure

In a JSX file, you can define a component directly with React.createClass:

var CustomComponent = React.createClass({
    render: function(){
        return (<div className="custom-component"></div>);
    }
});

This is a very convenient way to define a component. The component's structure is defined in the render function, but this is not just a simple template engine: we can manipulate the component's structure conveniently and intuitively with JS. For example, say I want to add a few nodes to the component:

var CustomComponent = React.createClass({
    render: function(){
        var $nodes = ['h','e','l','l','o'].map(function(str){
            return (<span>{str}</span>);
        });
        return (<div className="custom-component">{$nodes}</div>);
    }
});

This way, React gives components a flexible structure. So how does React handle logic?

Logic

Anyone who has written frontend components knows that a component usually needs to respond to its own DOM events and do some processing, and when necessary expose some external interfaces. How does a React component do these two things?

Event handling

Say I have a button component that needs to run some logic when clicked. The React component looks roughly like this:

var ButtonComponent = React.createClass({
    render: function(){
        return (<button>屠龙宝刀,点击就送</button>);
    }
});

Clicking the button should trigger the corresponding logic. One intuitive approach is to bind an onclick event to the button, containing the logic to execute:

function getDragonKillingSword() {
    // give away the sword
}
var ButtonComponent = React.createClass({
    render: function(){
        return (<button onclick="getDragonKillingSword()">屠龙宝刀,点击就送</button>);
    }
});

But the logic of getDragonKillingSword() is really internal component behavior, so it clearly should be wrapped inside the component. In React you can write it like this:

var ButtonComponent = React.createClass({
    getDragonKillingSword: function(){
        // give away the sword
    },
    render: function(){
        return (<button onClick={this.getDragonKillingSword}>屠龙宝刀,点击就送</button>);
    }
});

That takes care of responding to internal events. What if we need to expose an interface?

Exposing an interface

In fact, getDragonKillingSword is already an interface. What if there is a parent component that wants to call it?

The parent component looks roughly like this:

var ImDaddyComponent = React.createClass({
    render: function(){
        return (
            <div>
                {/* other components */}
                <ButtonComponent />
                {/* other components */}
            </div>
        );
    }
});

To call a component's method manually, first mark the ButtonComponent with a ref="" attribute, say <ButtonComponent ref="getSwordButton" />. Then, inside the parent component's own methods, the interface method can be invoked like this:

this.refs.getSwordButton.getDragonKillingSword();

Looking good! But now another question comes up: what if the parent component wants to supply the method that is called when the button is clicked?

Passing parameters

The parent component can pass the function to execute directly to the child:

<ButtonComponent clickCallback={this.getSwordButtonClickCallback}/>

Then the child component calls the parent's method:

var ButtonComponent = React.createClass({
    render: function(){
        return (<button onClick={this.props.clickCallback}>屠龙宝刀,点击就送</button>);
    }
});

Through this.props a child component can access any parameter passed in when the parent created it, which is why this.props is commonly used as a set of configuration parameters.

Each person can only claim one dragon-slaying sword, so the button should be disabled after one click. That calls for a "has it been clicked" state inside the child component. How do we handle that?

Component state

In React, every component has its own state, accessible in its methods via this.state, while the initial state is defined by the getInitialState() method. For our sword button component, the initial state should be "not yet clicked", so getInitialState should define the initial state clicked: false, and the click handler should then change this state to clicked: true:

var ButtonComponent = React.createClass({
    getInitialState: function(){
        // define the initial state
        return {
            clicked: false
        };
    },
    getDragonKillingSword: function(){
        // give away the sword

        // update the clicked state
        this.setState({
            clicked: true
        });
    },
    render: function(){
        return (<button onClick={this.getDragonKillingSword}>屠龙宝刀,点击就送</button>);
    }
});

That completes the maintenance of the clicked state. The render function should then adjust the node's appearance according to the state, here by setting the button to disabled, so render needs the corresponding branch:

render: function(){
    var clicked = this.state.clicked;
    if(clicked)
        return (<button disabled="disabled" onClick={this.getDragonKillingSword}>屠龙宝刀,点击就送</button>);
    else
        return (<button onClick={this.getDragonKillingSword}>屠龙宝刀,点击就送</button>);
}

Summary

This was a brief introduction to managing a component's structure and logic with JSX. React actually defines many more methods on components, as well as a component lifecycle, all of which make a component's logic even more powerful.

Resource loading

CSS files define a component's styles. Modern module loaders can usually load CSS files, and when they can't, a plugin is generally available. CSS and images can both be regarded as resources, since once loaded they generally need no further processing.

React does nothing special here. It does offer an Inline Style approach for writing CSS inside JSX, but probably few people try it, since CSS styling these days is rarely just plain CSS files: it usually goes through a preprocessor such as Less or Sass, then postprocessors like postcss, myth, autoprefixer, or cssmin. So resource loading is generally just handled bluntly by the module loader.

Component dependencies

Handling component dependencies generally splits into two parts: loading components and using components.

Component loading

React provides no component loading mechanism. You still need <script> tags, or a module loader, to load a component's JSX and resource files.

Component usage

If you were paying attention, you have already seen examples of this. To use one component inside another, say ChildComponent inside ParentComponent, you just write <ChildComponent /> in ParentComponent's render() method, passing parameters when necessary.

A question

At this point a question arises: React handles only structure and logic; it doesn't manage resources, and it doesn't manage dependencies. That's right: React is nearly twenty thousand lines of code, yet provides no module loader, and unlike Angular or jQuery it ships no scaffolding either... no Ajax library, no Promise library, nothing of the sort...

Virtual DOM

So why is it so big? Because it implements a Virtual DOM. What is the virtual DOM for? To answer that we have to start with the browser itself.

As we know, when a browser renders a page, after loading the HTML document it parses it and builds the DOM tree, then combines it with the CSSOM tree produced by parsing the CSS to create their love child, the RenderObject tree, which is then rendered to the page (possibly with some optimizations in between, such as a RenderLayer tree). All of this lives in the rendering engine, which in a browser is separate from the JavaScript engine (be it JavaScriptCore or V8). To make it convenient for JS to manipulate the DOM structure, the rendering engine exposes some interfaces for JavaScript to call. Because the two are separated, communication comes at a cost, so calling the DOM interfaces from JavaScript performs poorly, and performance best practices all focus on minimizing the number of DOM operations.

So what does the virtual DOM do? It implements the DOM tree (roughly) directly in JavaScript. A component's HTML structure does not generate real DOM directly; instead it maps to a virtual JavaScript DOM structure. React then runs a diff algorithm over this virtual DOM to find the minimal set of changes, and writes only those changes to the actual DOM. Since the virtual DOM exists as a JS structure, computing over it is fast, and because the number of real DOM operations is reduced, performance improves considerably.
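The diffing idea can be sketched with plain objects. This is a minimal, hypothetical illustration of computing a patch list between two trees; it is not React's actual reconciliation algorithm:

```javascript
// Minimal sketch of virtual-DOM-style diffing (NOT React's real algorithm).
// A "virtual node" here is just { tag, text, children }.
function diff(oldNode, newNode, path = 'root', patches = []) {
  if (!oldNode) {
    patches.push({ type: 'CREATE', path, node: newNode });
  } else if (!newNode) {
    patches.push({ type: 'REMOVE', path });
  } else if (oldNode.tag !== newNode.tag) {
    patches.push({ type: 'REPLACE', path, node: newNode });
  } else if (oldNode.text !== newNode.text) {
    patches.push({ type: 'TEXT', path, text: newNode.text });
  } else {
    const len = Math.max(
      (oldNode.children || []).length,
      (newNode.children || []).length
    );
    for (let i = 0; i < len; i++) {
      diff((oldNode.children || [])[i], (newNode.children || [])[i],
           path + '.' + i, patches);
    }
  }
  return patches; // only these patches would be applied to the real DOM
}

const before = { tag: 'div', children: [{ tag: 'span', text: 'hello' }] };
const after  = { tag: 'div', children: [{ tag: 'span', text: 'world' }] };
console.log(diff(before, after));
```

The point is that only the minimal patch list touches the real DOM; everything else happens in cheap JS object comparisons.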

I understand all that, but why don't we have a module loader?

That's where Webpack comes in.

About Webpack

What is Webpack?

Webpack is actually a bundler, not a module loader like RequireJS or SeaJS. With Webpack you can handle dependencies the way Node.js does; it resolves the dependencies between modules and bundles the code.

Installing Webpack

First, you need Node.js.

Then install webpack with npm install -g webpack. You can also run webpack as a gulp task; if you use gulp, run npm install --save-dev gulp-webpack.

Configuring Webpack

Webpack's build process needs a configuration file. A typical configuration file looks roughly like this:

var webpack = require('webpack');
var commonsPlugin = new webpack.optimize.CommonsChunkPlugin('common.js');

module.exports = {
    entry: {
        entry1: './entry/entry1.js',
        entry2: './entry/entry2.js'
    },
    output: {
        path: __dirname,
        filename: '[name].entry.js'
    },
    resolve: {
        extensions: ['', '.js', '.jsx']
    },
    module: {
        loaders: [{
            test: /\.js$/,
            loader: 'babel-loader'
        }, {
            test: /\.jsx$/,
            loader: 'babel-loader!jsx-loader?harmony'
        }]
    },
    plugins: [commonsPlugin]
};

This configures Webpack's bundling behavior, in several parts:

  • entry: specifies the entry files for the bundle; each key-value pair is one entry file
  • output: configures the bundle output; path defines the output folder, and filename defines the name of the bundled file, in which [name] is replaced by the keys from entry (here entry1 and entry2)
  • resolve: defines how module paths are resolved; the most common option is extensions, which lists module suffixes so that you can omit the suffix when importing a module and it will be completed automatically
  • module: defines how modules are processed; loaders defines a series of loaders together with regular expressions: when a file to be loaded matches a test regex, it is handed to the corresponding loader, which is exactly what makes webpack so powerful. Here, every file ending in .js is processed by babel-loader, while files ending in .jsx go through jsx-loader first and then babel-loader. These loaders also need to be installed with npm install
  • plugins: defines the plugins to use; commonsPlugin, for example, extracts the parts shared by multiple entry files into common.js during bundling

Webpack has many other options; see its configuration documentation for details.

Running the build

If you installed webpack with npm install -g webpack, you can run the bundling command directly from the command line, like this:

$ webpack --config webpack.config.js

This reads webpack.config.js in the current directory as the configuration file and performs the bundling.

If you use the gulp plugin gulp-webpack instead, write a gulp task in your gulpfile:

var gulp = require('gulp');
var webpack = require('gulp-webpack');
var webpackConfig = require('./webpack.config');
gulp.task("webpack", function() {
    return gulp
        .src('./')
        .pipe(webpack(webpackConfig))
        .pipe(gulp.dest('./build'));
});

Writing components

Leveling up with Babel

Webpack lets us write modules using Node.js's CommonJS conventions. A simple Hello World module, for instance, can be written like this:

var React = require('react');

var HelloWorldComponent = React.createClass({
    displayName: 'HelloWorldComponent',
    render: function() {
        return (<div>Hello world</div>);
    }
});

module.exports = HelloWorldComponent;

Wait, this is no different from what we had before; still nothing fancy. Programmers should write code with some geek flair, and this is just too plain. It's the ES6 era now, so our React code should be written in ES6 too, and babel-loader is exactly for that: Babel transforms ES6 code into ES5. First install it with npm install --save-dev babel-loader. Once installed you can use it in two ways: configure it in the loaders section of webpack.config.js as shown earlier, or invoke it directly in code, for example:

var HelloWorldComponent = require('!babel!jsx!./HelloWorldComponent');

So how can Babel raise the bar for our code? Let's rework the earlier HelloWorld:

import React from 'react';

export default class HelloWorldComponent extends React.Component {
    constructor() {
        super();
        this.state = {};
    }
    render() {
        return (<div>Hello World</div>);
    }
}

Now, any other component that needs to pull in HelloWorldComponent only has to do this:

import HelloWorldComponent from './HelloWorldComponent'

Better, right? With import to bring in modules, you can also define classes and class inheritance directly. getInitialState is no longer needed either: just set this.state = xxx in the constructor.

Babel brings much more than this. It lets you try many excellent ES6 features, such as arrow functions, whose hallmark is that this inside the function is the same as this outside, so you can finally say goodbye to var that = this:

['H', 'e', 'l', 'l', 'o'].map((c) => {
    return (<span>{c}</span>);
});

There is plenty more; see Babel's learning documentation for details.

Writing styles

I am a heavy Less addict: writing plain CSS without Less leaves me weak, unmotivated, and irritable. I also don't like writing vendor prefixes by hand in Less, so my usual setup is gulp + less + autoprefixer. So how do you write styles in React components organized by Webpack?

That's right: loaders again.

Add a Less configuration to the loaders in webpack.config.js:

{
  test: /\.less$/,
  loader: 'style-loader!css-loader!autoprefixer-loader!less-loader'
}

With this configuration, you can import Less styles directly in module code:

import React from 'react';

require('./HelloWorldComponent.less');

export default class HelloWorldComponent extends React.Component {
    constructor() {
        super();
        this.state = {};
    }
    render() {
        return (<div>Hello World</div>);
    }
}

Other loaders

Webpack's loaders provide a lot of help for React componentization. For images there is a loader too:

{ test: /\.png$/, loader: "url-loader?mimetype=image/png" }

For more loaders, head over to webpack's wiki.

Live-debugging React components under Webpack

Another powerful aspect of combining Webpack and React is that after you modify a component's source, the change is synced to the page without a refresh. This requires two libraries: webpack-dev-server and react-hot-loader.

First install both: npm install --save-dev webpack-dev-server react-hot-loader.

Once installed, start configuring. First modify the entry configuration:

entry: {
  helloworld: [
    'webpack-dev-server/client?http://localhost:3000',
    'webpack/hot/only-dev-server',
    './helloworld'
  ]
},

This points hot reloading at the corresponding dev server. Then add react-hot-loader to the loaders configuration; for example, all of my component code lives in the scripts folder:

{
  test: /\.js?$/,
  loaders: ['react-hot', 'babel'],
  include: [path.join(__dirname, 'scripts')]
}

Finally, configure the plugins, adding the hot module replacement plugin and the plugin that keeps errors from aborting:

plugins: [
  new webpack.HotModuleReplacementPlugin(),
  new webpack.NoErrorsPlugin()
]

That completes the configuration, but debugging now requires a running server, and the configuration above maps to http://localhost:3000, so let's start a server on local port 3000. Create a server.js in the project root:

var webpack = require('webpack');
var WebpackDevServer = require('webpack-dev-server');
var config = require('./webpack.config');

new WebpackDevServer(webpack(config), {
  publicPath: config.output.publicPath,
  hot: true,
  historyApiFallback: true
}).listen(3000, 'localhost', function (err, result) {
  if (err) console.log(err);
  console.log('Listening at localhost:3000');
});

Now the debug server runs on local port 3000. If your page is index.html in the root directory, for instance, you can open it directly at http://localhost:3000/index.html, and after you modify a React component the page is updated in sync (apparently a websocket is used to sync the data). Here is a simple demonstration of the effect:


Conclusion

React's componentized development is a great idea, and Webpack makes writing and managing React components much more convenient. This only scratches the surface of React and Webpack; many more best practices are waiting to be discovered along the way.


Using JavaScript to Create Geospatial and Advanced Maps


Geographic Information Systems (GIS) is an area of cartography and information technology concerned with the storage, manipulation, analysis, and presentation of geographic and spatial data. You are probably most familiar with GIS services that produce dynamic, two-dimensional tile maps which have been prominent on the web since the days of MapQuest.

Until recently, developing geospatial apps beyond a 2D map required a comprehensive GIS service such as ArcGIS, Nokia Here, or Google Maps. While these APIs are powerful, they are also expensive, onerous to learn, and lock the map developer to a single solution. Fortunately, there are now a wealth of useful, open source JavaScript tools for handling advanced cartography and geospatial analysis.

In this article, I’ll examine how to implement GIS techniques with JavaScript and HTML, focusing on lightweight tools for specific tasks. Many of the tools I’ll cover are based on services such as Mapbox, CloudMade, and MapZen, but these are all modular libraries that can be added as packages to Node.js or used for analysis in a web browser.

Note: The CodePen examples embedded in this post are best viewed on CodePen directly.

Geometry & 3D

Distance and Measurement

It is especially useful to have small, focused libraries that perform distance measurement and conversion operations, such as finding the area of a geo-fence or converting miles to kilometers. The following libraries work with GeoJSON formatted objects representing geographic space.

  • Geolib provides distance (and estimated time) calculations between two latitude-longitude coordinates. A handy feature of Geolib is order by distance, which sorts a list or array by distance. The library also supports elevation.
  • Turf.js, which is described in the next section, also provides a distance function to calculate the great-circle distance between two points. Additionally, Turf.js calculates area, distance along a path, and the midpoint between points.
  • Sylvester is a library for geometry, vector, and matrix math in JavaScript. This library is helpful when basic measurement of lines and planes is not enough.
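As an illustration of the kind of great-circle calculation these libraries perform, here is a hand-rolled haversine sketch (this is not the API of Geolib or Turf.js, just the underlying math):

```javascript
// Great-circle (haversine) distance in km between two lat/lon points.
function haversineKm(lat1, lon1, lat2, lon2) {
  const R = 6371; // mean Earth radius, km
  const toRad = d => d * Math.PI / 180;
  const dLat = toRad(lat2 - lat1);
  const dLon = toRad(lon2 - lon1);
  const a = Math.sin(dLat / 2) ** 2 +
            Math.cos(toRad(lat1)) * Math.cos(toRad(lat2)) *
            Math.sin(dLon / 2) ** 2;
  return 2 * R * Math.asin(Math.sqrt(a));
}

// One degree of longitude at the equator is roughly 111 km:
console.log(haversineKm(0, 0, 0, 1).toFixed(1));
```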

3D

While the above libraries work well for 2D projections of geography, three-dimensional GIS is an exciting and expansive field—which is natural because we live in 3D space. Fortunately, WebGL and the HTML5 canvas have also opened up new 3D techniques for web developers.

Here’s an example of how to display GeoJSON Features on a 3D object:

You can also check out Byron Houwen’s article on WebGL and JavaScript, which shows how to create a terrain map of earth with Three.js

Geo Features & Points

Much of the work in GIS involves dealing with points, shapes, symbols, and other features. The most basic task is to add shape or point features to a map. The well-established Leaflet library and newcomer Turf.js make this much easier and allow users to work with feature collections.

  • Leaflet is simply the best option for working with the display of points, symbols, and all types of features on web and mobile devices. The library supports rectangles, circles, polygons, points, custom markers, and a wide variety of layers. It performs quickly and handles a variety of formats. The library also has a rich ecosystem of third-party plug-ins.
  • Turf.js is a library from Mapbox for geospatial analysis. One of the great things about Turf.js is that you can create a collection of features and then spatially analyze, modify (geoprocess), and simplify them, before using Leaflet to present the data. Like Geolib, Turf.js will calculate the path length, the feature center, and the points inside a feature.
  • Simple Map D3 creates choropleths and other symbology by simply defining a GeoJSON object and data attribute.

The following is an example of using Turf.js to calculate the population density of all counties in the state of California and then displaying the results as a Leaflet choropleth map.

A key concept in Turf.js is a collection of geographic features, such as polygons. These features are typically GeoJSON features that you want to analyze, manipulate, or display on a map. You start with a GeoJSON object with an array of county features. Then, create a collection from this object:

collection = turf.featurecollection(counties.features);

With this collection you can perform many useful operations. You can transform one or more collections with joins, intersections, interpolation, and exclusion. You can calculate descriptive statistics, classifications, and sample distributions.

In the case of population density, you can calculate the natural breaks (Jenks optimization) or quantile classifications for population density:

breaks = turf.jenks(collection, "pop_density", 8);

The population density (population divided by area) value was calculated and stored as a property of each county, but the operation works for any feature property.
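As a sketch of that precomputation step, each county feature can be given a pop_density property before classification. The property names population and area_km2 here are assumptions for illustration, not part of Turf's API:

```javascript
// Attach a pop_density property to each GeoJSON feature.
// Assumed input properties: population, area_km2.
function withDensity(featureCollection) {
  featureCollection.features.forEach(f => {
    f.properties.pop_density = f.properties.population / f.properties.area_km2;
  });
  return featureCollection;
}

const counties = {
  type: 'FeatureCollection',
  features: [
    { type: 'Feature', properties: { population: 100000, area_km2: 250 }, geometry: null }
  ]
};
console.log(withDensity(counties).features[0].properties.pop_density); // 400
```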

Working with Points

Points are a special type of geographic feature representing a latitude-longitude coordinate (and associated data). These features are frequently used in web applications, e.g. to display a set of nearby businesses on a map.

Turf.js provides a number of different operations for points, including finding the centroid point in a feature and creating a rectangle or polygon that encompasses all points. You can also calculate statistics from points, such as the average based on a data value for each point.
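A minimal sketch of such a point statistic, averaging a property across a GeoJSON FeatureCollection of points (the property name value is an assumption for illustration, not Turf's API):

```javascript
// Average a numeric property across GeoJSON point features.
function averageValue(points) {
  const vals = points.features.map(f => f.properties.value);
  return vals.reduce((sum, v) => sum + v, 0) / vals.length;
}

const points = {
  type: 'FeatureCollection',
  features: [
    { type: 'Feature', properties: { value: 10 }, geometry: { type: 'Point', coordinates: [0, 0] } },
    { type: 'Feature', properties: { value: 30 }, geometry: { type: 'Point', coordinates: [1, 1] } }
  ]
};
console.log(averageValue(points)); // 20
```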

There are also extensions for Leaflet.js that help when dealing with a large number of points:

  • Marker Cluster for Leaflet is great for visualizing the results from Turf, or a collection of points that is large. The library itself handles hundreds of points, but there are plugins like Marker Cluster and Mask Canvas for handling hundreds of thousands of points.
  • Heat for Leaflet creates a dynamic heat map from point data. It even works for datasets with thousands of points.

Geocoding & Routing

Routing, geocoding, and reverse geocoding locations requires an online service, such as Google or Nokia Here, but recent libraries have made the implementation easier. There are also suitable open source alternatives.

The HTML5 Geolocation API provides a simple method of getting a device’s GPS location (with user permission):

1
2
3
navigator.geolocation.getCurrentPosition(function(result){
  // do something with result.coords
});

Location-aware web applications can use Turf.js spatial analysis methods for advanced techniques such as geofencing a location inside or outside of a map feature. For instance, you can take the result from the above example and use the turf.inside method to see if that coordinate is within the boundaries of a given neighborhood.
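Under the hood, a geofencing check like turf.inside boils down to a point-in-polygon test. Here is a minimal ray-casting sketch of that idea (not Turf's actual code):

```javascript
// Ray-casting point-in-polygon test. ring is an array of [lng, lat] pairs
// describing the polygon boundary; point is a single [lng, lat] pair.
function pointInRing(point, ring) {
  const [x, y] = point;
  let inside = false;
  for (let i = 0, j = ring.length - 1; i < ring.length; j = i++) {
    const [xi, yi] = ring[i];
    const [xj, yj] = ring[j];
    // Does a horizontal ray from the point cross this edge?
    const intersects = (yi > y) !== (yj > y) &&
      x < ((xj - xi) * (y - yi)) / (yj - yi) + xi;
    if (intersects) inside = !inside;
  }
  return inside;
}

const neighborhood = [[0, 0], [10, 0], [10, 10], [0, 10]];
console.log(pointInRing([5, 5], neighborhood));  // inside the square
console.log(pointInRing([15, 5], neighborhood)); // outside the square
```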

  • GeoSearch is a Leaflet plugin for geocoding that allows the developer to choose between the ArcGIS, Google, and OpenStreetMaps geocoder. Once the control is added to the base map, it will automatically use the selected geocoding service to show the best search result on the map. The library is designed to be extensible to other third-party services.
  • Geo for Node.js is a geocoding library that uses Google’s Geocode API for geocoding and reverse geocoding. It additionally supports the Geohash system for URL encoding of latitude-longitude coordinates.

As with geocoding, there are myriad routing and direction services, but they will cost you. A reliable, open source alternative is the Open Source Routing Machine (OSRM) service by MapZen. It provides a free service for routing car, bicycle, and pedestrian directions. Transit Mix cleverly uses the OSRM Routing tool for creating routes in their transportation planning tool.

Spatial and Network Analysis

I’ve mentioned a few spatial analysis methods you can implement with Turf.js and other libraries, but I’ve only covered a small part of a vast world. I’ve created an example application that illustrates several of the techniques I’ve introduced.

Conclusion

In this article I hope to have provided a comprehensive overview of the tools which are available to perform geospatial analysis and geoprocessing with JavaScript. Are you using these libraries in your projects already? Did I miss any out? Let me know in the comments.

If you want to go even further with geospatial analysis and geoprocessing with JavaScript, here are a few more resources and utilities:

  • NetworkX and D3.js — Mike Dewar's book on D3.js includes a number of examples of using D3 with maps and spatial analysis. One of the more interesting examples is creating a directed graph of the New York Metro, which is done by analyzing the Google Transit specification for MTA with NetworkX.
  • Simplify.js — Turf uses Vladimir Agafonkin's Simplify.js to perform shape simplification. That library can also be installed as an independent Node.js package for online or offline processing of files.
  • d3 Geo Exploder — Ben Southgate’s d3.geo.exploder allows you to transition geographic features (geoJSON) to another shape, like a grid or a scatter plot.
  • Shp — Use this library to convert a shapefile (and data files) to GeoJSON
  • ToGeoJSON — Use this library to convert KML & GPX to GeoJSON
  • Shp2stl – Use this library to convert geodata into 3D models that can be rendered or 3D printed.
  • MetaCRS and Proj4js — use these libraries to convert between coordinate systems.

Make a Mobile App with ReactJS in 30 Minutes


React is enabling frontend developers to build apps like never before. Its benefits are many: one-way data flow, easy component lifecycle methods, declarative components, and more.

Reapp, built on React, was recently released. It's a mobile app platform designed for performance and productivity. Think of it as a well-optimized UI kit, along with a build system and a bunch of helpers that let you build apps easily.

reapp

Reapp gives us some nice things out of the box:

  • A complete UI kit for mobile
  • “reapp new” to generate a working app
  • “reapp run” to serve our app with ES6 and hot reloading
  • Themes and animations
  • Routing and requests packages
  • Building our app to Cordova

What we’ll be building

To explore using Reapp we’re going to build an app that lets you search with the Flickr API and view the results in a photo gallery. This tutorial should take you less than half an hour to follow along with!

Starting out

With node installed, let's run sudo npm install -g reapp to install the Reapp CLI. Once that installs, run reapp new flickrapp. Finally, cd flickrapp and reapp run.

You should see this:

cli

Browse to localhost:3010 and you can see the default Reapp app:

first-run

Tip: With Chrome’s Developer Tools, enable mobile device emulation to view your app as a mobile app

devtools

Alright! Now we’re fully set up with a React stack using Reapp components. Lets check the file structure:

/app
  components/
    home/
      Sub.jsx
    App.jsx
    Home.jsx
  app.js
  routes.js
/assets

Reapp scaffolded us some demonstration stuff here, which is what you see in ./app/components. The rest is just setting up our app. ./app/app.js is the entry to our app; it loads Reapp and runs our routes, which are found in ./app/routes.js.

Start Our View

We have our app generated, but Reapp generates us a full demo app showing nested views, and we won't need much more than a single page. Let's simplify things. In routes.js we can swap it out to just look like this:

module.exports = ({ routes, route }) =>
  routes(require,
    route('app', '/', { dir: '' })
  );

This wires up the base route (at http://localhost:3010) to the name app, which Reapp’s router will automatically look for in ./components/App.jsx.

Now we can delete the Home.jsx and home/Sub.jsx files, since we don’t need multiple views. You can leave them be as well if you’d like to explore using them later.

In the App.jsx file, we can simplify it to:

import React from 'react';
import View from 'reapp-ui/views/View';

export default React.createClass({
  render() {
    var { photos } = this.state;

    return (
      <View title="Flickr Search" styles={{ inner: { padding: 20 } }}>
        <p>Hello World</p>
      </View>
    );
  }
});

If you refresh, you should see an empty view with your new title “Flickr Search” at top.

Fetch Data from Flickr

Now we have an interface with no logic. Before we can link the Button to the display of photos, we need to grab the photos from Flickr using React conventions. First, get yourself a Flickr account and API key using their quick sign up form.

After filling it out (and signing up if necessary) copy the Public Key they give you and add it as a constant to App.jsx. You'll also need the URL that's used for searching for photos, which I found by using their [API explorer](https://www.flickr.com/services/api/explore/flickr.photos.search).

It should look like this:

const key = '__YOUR_KEY_HERE__';
const base = `https://api.flickr.com/services/rest/?api_key=${key}&format=rest&format=json&nojsoncallback=1`;

Be sure to put your key in place of __YOUR_KEY_HERE__.

Note: const is a new feature in the next version of JavaScript, called ES6. It’s just like a variable, but one that can never be changed once it’s set. How can we use this in our app now? Reapp has a Webpack build system built in that gives you all sorts of features, including ES6 support!

Next, define getInitialState() on our React class, so our component can track the photos we'll be fetching. We add this as the first property after React.createClass. Because we're storing photos in a list, add an array:

getInitialState() {
  return {
    photos: []
  }
},

This will give us access to this.state.photos in our render function. In the UI we’ll need a Button and Input to use for searching:

import Button from 'reapp-ui/components/Button';
import Input from 'reapp-ui/components/Input';

And then change the render() function:

  render() {
    var { photos } = this.state;

    return (
      <View title="Flickr Search">
        <Input ref="search" />
        <Button onTap={this.handleSearch}>Search Images</Button>

        <div className="verticalCenter">
          {!photos.length &&
            <p>No photos!</p>
          }
        </div>
      </View>
    );
  }

And we get this:

no-photos

Pretty easy! There are a few things to note here. First, notice the ref property on the Input? Ref is short for reference, and lets us track DOM elements in our class. We'll use that later for getting the value of the field.

Also, note className="verticalCenter" on the div. Two things: Because we're using JSX that compiles to JS objects ([more reading here](http://facebook.github.io/react/docs/jsx-in-depth.html)), we can't use the normal class attribute, so instead we use the JavaScript convention of className to set the class. The verticalCenter class is given to us by Reapp, and aligns things centered on our page.

Finally, the onTap property on Button? It's pointing to this.handleSearch. But we don't have any handleSearch function yet. React will expect that function defined on the class, so let's wire it up. First, run npm install --save superagent, which gives us the excellent Superagent request library. Then, import it:

import Superagent from 'superagent';

Finally, define handleSearch:

  handleSearch() {
    let searchText = this.refs.search.getDOMNode().value;
    Superagent
      .get(`${base}&method=flickr.photos.search&text=${searchText}&per_page=10&page=1`, res => {
        if (res.status === 200 && res.body.photos)
          this.setState({
            photos: res.body.photos.photo
          });
      });
  },

A few notes:

  • this.refs.search.getDOMNode() returns the input DOM node that we put the “search” ref on earlier.
  • ${base} will grab the URL we put in the constant.
  • this.setState will take our response photos and put them into the this.state.photos array we defined earlier in getInitialState.

Displaying Flickr Photos

Now we’ve fetched our Flickr photos and put them into the state. The last step is to display them. You can add this to the first line of your render function to see what Flickr returns:

render() {
  console.log(this.state.photos);
  // ... rest of render
}

In your console you'll see that Flickr returns an object with some properties. On this helpful page I found out how to construct the URLs for Flickr photos.

Here’s how I landed on constructing the URL for a photo, which I put as a simple function on the class we’re building:

getFlickrPhotoUrl(image) {
  return `https://farm${image.farm}.staticflickr.com/${image.server}/${image.id}_${image.secret}.jpg`;
},

This function takes a Flickr photo object and turns it into the URL we need for display. Next, let's edit the setState call in handleSearch:

this.setState({
  photos: res.body.photos.photo.map(this.getFlickrPhotoUrl)
});

The map function will loop over those photo objects and pass them to getFlickrPhotoUrl, which returns our URL. We’re all ready to display them!

Let's import the Gallery component from Reapp and use it:

import Gallery from 'reapp-ui/components/Gallery';

In the render function, below the <p>No photos!</p> block:

{!!photos.length &&
  <Gallery
    images={photos}
    width={window.innerWidth}
    height={window.innerHeight - 44}
  />
}

The Gallery widget takes these three properties and outputs fullscreen images that you can swipe between. With this in place, we have completed the flow of our app. Check out your browser and see it in action.

Note: Why window.innerHeight - 44? We're adjusting for the TitleBar height in our app. There are better ways we could do this, but for now this is simple and works well.

Final touches

We're just about done, but there are a couple of tweaks we can make. As it stands, the gallery can never be closed. Adding an onClose property to the Gallery fixes that, but we'll also need to update the state to reflect the gallery being closed. It's actually pretty easy. Just add this to Gallery:

onClose={() => this.setState({ photos: [] })}

Also, our Input looks a little plain as it is. Let’s add a border, margin and placeholder:

<Input ref="search" placeholder="Enter your search" styles={{
  input: {
    margin: '0 0 10px 0',
    border: '1px solid #ddd'
  }
}} />

Much better!

Final code

As is, our entire codebase fits into the ./components/App.jsx file. It’s easy to read and understand and uses some nice new features of ES6. Here it is:

import React from 'react';
import View from 'reapp-ui/views/View';
import Button from 'reapp-ui/components/Button';
import Input from 'reapp-ui/components/Input';
import Superagent from 'superagent';
import Gallery from 'reapp-ui/components/Gallery';

const MY_KEY = '__YOUR_KEY_HERE__';
const base = `https://api.flickr.com/services/rest/?api_key=${MY_KEY}&format=rest&format=json&nojsoncallback=1`;

export default React.createClass({
  getInitialState() {
    return {
      photos: []
    }
  },

  // see: https://www.flickr.com/services/api/misc.urls.html
  getFlickrPhotoUrl(image) {
    return `https://farm${image.farm}.staticflickr.com/${image.server}/${image.id}_${image.secret}.jpg`;
  },

  handleSearch() {
    let searchText = this.refs.search.getDOMNode().value;
    Superagent
      .get(`${base}&method=flickr.photos.search&text=${searchText}&per_page=10&page=1`, res => {
        if (res.status === 200 && res.body.photos)
          this.setState({
            photos: res.body.photos.photo.map(this.getFlickrPhotoUrl)
          });
      });
  },

  render() {
    var { photos } = this.state;

    return (
      <View title="Flickr Search" styles={{ inner: { padding: 20 } }}>

        <Input ref="search" placeholder="Enter your search" styles={{
          input: {
            margin: '0 0 10px 0',
            border: '1px solid #ddd'
          }
        }} />
        <Button onTap={this.handleSearch}>Search Images</Button>

        <div className="verticalCenter">
          {!photos.length &&
            <p>No photos!</p>
          }

          {!!photos.length &&
            <Gallery
              onClose={() => this.setState({ photos: [] })}
              images={photos}
              width={window.innerWidth}
              height={window.innerHeight - 44}
            />
          }
        </div>

      </View>
    );
  }
});

Next steps

We could keep going from here. We could display a list of images first and link them to the gallery. Reapp also has docs on its components, so you can browse and add them as you need. Good examples of Reapp code include the Kitchen Sink demo and the Hacker News app.

Check out the code

If you’d like to see this application’s code you can clone this repo. It includes everything you need except a Flickr API key, which you’ll want to sign up for and insert before testing it out.

Steps to get the repo running:

  1. Install Node/npm, and Reapp: sudo npm install -g reapp
  2. Clone the repo: git clone git@github.com:reapp/flickr-demo
  3. Install dependencies: npm install
  4. Start server: reapp run
  5. View it in your browser at http://localhost:3010

You’ll probably want to explore the Reapp getting started docs and the individual UI widgets docs to keep you going.

Happy hacking!


Build A Real-Time Twitter Stream with Node and React.js

Introduction

Welcome to the second installment of Learning React, a series of articles focused on becoming proficient and effective with Facebook’s React library. If you haven’t read the first installment, Getting Started and Concepts, it is highly recommended that you do so before proceeding.

Today we are going to build an application in React using isomorphic JavaScript.

Iso-what?

Isomorphic. JavaScript. It means writing one codebase that can run on both the server side and the client side.

This is the concept behind frameworks like Rendr, Meteor & Derby. You can also accomplish this using React, and today we are going to learn how.

Why is this awesome?

I’m an Angular fan just like everybody else, but one pain point is the potential SEO impact.

But I thought Google executes and indexes javascript?

Yeah, not really. They just give you an opportunity to serve up static HTML. You still have to generate that HTML with PhantomJS or a third party service.

Enter React.

React is amazing on the client side, but its ability to be rendered on the server side makes it truly special. This is because React uses a virtual DOM instead of the real one, which allows us to render our components to markup.

Getting Started

Alright gang, let’s get down to brass tacks. We are going to build an app that shows tweets about this article, and loads new ones in real time. Here are the requirements:

  • It should listen to the Twitter streaming API and save new tweets as they come in.
  • On save, an event should be emitted to the client side that will update the views.
  • The page should render server side initially, and the client side should take it from there.
  • We should use infinite scroll pagination to load blocks of 10 tweets at a time.
  • New unread tweets should have a notification bar that will prompt the user to view them.

Here is a quick look at what we’ll be building. Make sure you check out the demo to see everything happen in real time.

Let’s take a look at some of the tools we are going to use besides React:

  • Express – A node.js web application framework
  • Handlebars – A templating language we are going to write our layout templates in
  • Browserify – A dependency bundler that will allow us to use CommonJS syntax
  • Mongoose – A mongoDB object modeling library
  • Socket.io – Real time bidirectional event based communication
  • nTwitter – Node.js Twitter API library

Server Side

Let’s start by building out the server side of our app. Download the project files here, and follow along below:

DIRECTORY STRUCTURE

components/ // React Components Directory
---- Loader.react.js            // Loader Component
---- NotificationBar.react.js   // Notification Bar Component
---- Tweet.react.js             // Single Tweet Component
---- Tweets.react.js            // Tweets Component
---- TweetsApp.react.js         // Main App Component 
models/ // Mongoose Models Directory
---- Tweet.js // Our Mongoose Tweet Model
public/ // Static Files Directory
---- css
---- js
---- svg
utils/
---- streamHandler.js // Utility method for handling Twitter stream callbacks
views/      // Server Side Handlebars Views
---- layouts
-------- main.handlebars
---- home.handlebars
app.js      // Client side main
config.js   // App configuration
package.json
routes.js // Route definitions
server.js   // Server side main

PACKAGE.JSON

{
  "name": "react-isomorph",
  "version": "0.0.0",
  "description": "Isomorphic React Example",
  "main": "app.js",
  "scripts": {
    "watch": "watchify app.js -o public/js/bundle.js -v",
    "browserify": "browserify app.js | uglifyjs > public/js/bundle.js",
    "build": "npm run browserify ",
    "start": "npm run watch & nodemon server.js"
  },
  "author": "Ken Wheeler",
  "license": "MIT",
  "dependencies": {
    "express": "~4.9.7",
    "express-handlebars": "~1.1.0",
    "mongoose": "^3.8.17",
    "node-jsx": "~0.11.0",
    "ntwitter": "^0.5.0",
    "react": "~0.11.2",
    "socket.io": "^1.1.0"
  },
  "devDependencies": {
    "browserify": "~6.0.3",
    "nodemon": "^1.2.1",
    "reactify": "~0.14.0",
    "uglify-js": "~2.4.15",
    "watchify": "~2.0.0"
  },
  "browserify": {
    "transform": [
      "reactify"
    ]
  }
}

If you’re following along, simply run npm install and go get a glass of water. When you get back, we should have all of our dependencies in place, and it’s time to get our build on.

We now have a couple of commands we can use:

  • npm run watch – Running this command starts a watchify watch, so when we edit our js files, they get browserified on save.
  • npm run build – Running this command builds our bundle.js and minifies it for production
  • npm start – Running this command sets up a watch and runs our app via nodemon
  • node server – This command is what we use to run our app. In a production environment, I would recommend using something like forever or pm2.

Setting Up Our Server

For the purposes of keeping our focus on React, I am going to assume a working knowledge of Express-based server configuration. If you aren’t familiar with what is going on below, you can read up on any of the helpful articles on this site about the subject, most notably ExpressJS 4.0 – New Features & Upgrading from 3.0.

In the file below, we are doing 4 specific things:

  • Setting up a server via Express
  • Connecting to our MongoDB database
  • Initializing our socket.io connection
  • Creating our Twitter stream connection

SERVER.JS

// Require our dependencies
var express = require('express'),
  exphbs = require('express-handlebars'),
  http = require('http'),
  mongoose = require('mongoose'),
  twitter = require('ntwitter'),
  routes = require('./routes'),
  config = require('./config'),
  streamHandler = require('./utils/streamHandler');

// Create an express instance and set a port variable
var app = express();
var port = process.env.PORT || 8080;

// Set handlebars as the templating engine
app.engine('handlebars', exphbs({ defaultLayout: 'main'}));
app.set('view engine', 'handlebars');

// Disable etag headers on responses
app.disable('etag');

// Connect to our mongo database
mongoose.connect('mongodb://localhost/react-tweets');

// Create a new ntwitter instance
var twit = new twitter(config.twitter);

// Index Route
app.get('/', routes.index);

// Page Route
app.get('/page/:page/:skip', routes.page);

// Set /public as our static content dir
app.use("/", express.static(__dirname + "/public/"));

// Fire it up (start our server)
var server = http.createServer(app).listen(port, function() {
  console.log('Express server listening on port ' + port);
});

// Initialize socket.io
var io = require('socket.io').listen(server);

// Set a stream listener for tweets matching tracking keywords
twit.stream('statuses/filter',{ track: 'scotch_io, #scotchio'}, function(stream){
  streamHandler(stream,io);
});

nTwitter gives us access to the Twitter streaming API, so we use the statuses/filter endpoint, along with the track property, to return tweets that use a #scotchio hashtag or mention scotch_io. You can modify this query to your liking using the settings outlined in the Twitter Streaming API docs.

Models

In our app we use Mongoose to define our Tweet model. When receiving our data from our Twitter stream, we need somewhere to store it, and a static query method to return subsets of data based upon app parameters:

TWEET.JS

var mongoose = require('mongoose');

// Create a new schema for our tweet data
var schema = new mongoose.Schema({
    twid       : String
  , active     : Boolean
  , author     : String
  , avatar     : String
  , body       : String
  , date       : Date
  , screenname : String
});

// Create a static getTweets method to return tweet data from the db
schema.statics.getTweets = function(page, skip, callback) {

  var tweets = [],
      start = (page * 10) + (skip * 1);

  // Query the db, using skip and limit to achieve page chunks
  Tweet.find({},'twid active author avatar body date screenname',{skip: start, limit: 10}).sort({date: 'desc'}).exec(function(err,docs){

    // If everything is cool...
    if(!err) {
      tweets = docs;  // We got tweets
      tweets.forEach(function(tweet){
        tweet.active = true; // Set them to active
      });
    }

    // Pass them back to the specified callback
    callback(tweets);

  });

};

// Return a Tweet model based upon the defined schema
module.exports = Tweet = mongoose.model('Tweet', schema);

After defining our schema, we create a static method called getTweets. It takes 3 arguments: page, skip and callback.

When we have an application that not only renders server side but also has an active stream saving to the database behind the scenes, we need a way to make sure that when we request our next page of tweets, it accounts for tweets that may have been added since the app started running on the client.

This is where the skip argument comes into play. If 2 new tweets come in and we then request the next page, we need to skip 2 indexes forward so that our application’s pages stay relative to the original count and we don’t end up with duplicate tweets.
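
To make that arithmetic concrete, here is a small standalone simulation (a plain array standing in for MongoDB, and a getPage helper mirroring the start = (page * 10) + skip calculation in getTweets) of two tweets arriving between page loads:

```javascript
// Simulated database, newest tweet first: ids 30 down to 1.
let db = [];
for (let id = 30; id >= 1; id--) db.push(id);

// Mirrors getTweets: start at (page * 10) + skip, return 10 rows.
function getPage(page, skip) {
  const start = (page * 10) + skip;
  return db.slice(start, start + 10);
}

const firstPage = getPage(0, 0); // ids 30..21 shown on the initial load

// Two new tweets arrive over the stream and are saved (prepended).
db.unshift(31);
db.unshift(32);

// Without skip, page 1 re-serves ids 22 and 21, already on screen.
// With skip = 2, page 1 starts exactly where the initial load left off.
console.log(getPage(1, 0)); // [22, 21, 20, ...] includes duplicates
console.log(getPage(1, 2)); // [20, 19, ...] no duplicates
```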

Stream Handling

When our Twitter stream connection sends a new Tweet event, we need a method to take that data, save it to our database, and emit an event to the client side with the tweet data:

STREAMHANDLER.JS

var Tweet = require('../models/Tweet');

module.exports = function(stream, io){

  // When tweets get sent our way ...
  stream.on('data', function(data) {

    // Construct a new tweet object
    var tweet = {
      twid: data['id'],
      active: false,
      author: data['user']['name'],
      avatar: data['user']['profile_image_url'],
      body: data['text'],
      date: data['created_at'],
      screenname: data['user']['screen_name']
    };

    // Create a new model instance with our object
    var tweetEntry = new Tweet(tweet);

    // Save 'er to the database
    tweetEntry.save(function(err) {
      if (!err) {
        // If everything is cool, socket.io emits the tweet.
        io.emit('tweet', tweet);
      }
    });

  });

};

We start by requiring our Model, and when our stream emits an event, we grab the data we want to save, save it, and emit our socket event to the client with the Tweet we just saved.

Routes

Our routes are where the majority of the magic happens today. Let’s take a look at routes.js:

ROUTES.JS

var JSX = require('node-jsx').install(),
  React = require('react'),
  TweetsApp = require('./components/TweetsApp.react'),
  Tweet = require('./models/Tweet');

module.exports = {

  index: function(req, res) {
    // Call static model method to get tweets in the db
    Tweet.getTweets(0, 0, function(tweets) {

      // Render React to a string, passing in our fetched tweets
      var markup = React.renderComponentToString(
        TweetsApp({
          tweets: tweets
        })
      );

      // Render our 'home' template
      res.render('home', {
        markup: markup, // Pass rendered react markup
        state: JSON.stringify(tweets) // Pass current state to client side
      });

    });
  },

  page: function(req, res) {
    // Fetch tweets by page via param
    Tweet.getTweets(req.params.page, req.params.skip, function(tweets) {

      // Render as JSON
      res.send(tweets);

    });
  }

}

In the code above, we have two specific requirements:

  • For our index route, we want to return a full page rendered from our React source
  • For our page route, we want to return a JSON string containing additional tweets based upon our params.

By requiring our React components and calling the renderComponentToString method, we convert them to a string, which is then passed into our home.handlebars template.

We leverage our Tweets model to find tweets that have been stored in the database after coming in from our stream connection. Upon receiving the results of our query, we render our component to a String.

Notice that we are using non-JSX syntax when defining the component we want to render. This is because we are in our routes file, which is not being transformed.

Let’s take a look at our render method:

// Render our 'home' template
res.render('home', {
    markup: markup, // Pass rendered react markup
    state: JSON.stringify(tweets) // Pass current state to client side
});

Not only are we passing our stringified markup, but we also pass a state property. In order for our server-rendered application to pick up where it left off on the client, we need to pass the last state down so the two stay in sync.

Templates

Our app has two main templates, both of which are ridiculously simple. We start with a layout view, which wraps our target template.

MAIN.HANDLEBARS

<!doctype html>
<html lang="en">
  <head>
    <meta charset="utf-8">
    <meta name="viewport" content="width=device-width, initial-scale=1">
    <title>React Tweets</title>
    <link rel="stylesheet" type="text/css" href="css/style.css">
  </head>
  <body>
    {{{ body }}}
    <script src="https://cdn.socket.io/socket.io-1.1.0.js"></script>
    <script src="js/bundle.js"></script>
  </body>
</html>

{{{body}}} is where our home.handlebars template is loaded. On this page we also add script tags for socket.io and the bundle.js that Browserify outputs.

HOME.HANDLEBARS

<section id="react-app">{{{ markup }}}</section>
<script id="initial-state" type="application/json">{{{state}}}</script>

In our home.handlebars template, we take the component markup that we generated in our routes and insert it at {{{markup}}}.

Directly below we transfer our state. We use a script tag to hold a JSON string of our server’s state. When initializing our React components on the client side, we pull our state from here and then remove it.

Client Side Rendering

On the server we use renderComponentToString to generate markup for our components, but when using Browserify we need a client-side entry point to pick up the state we just saved and mount our application component.

APP.JS

/** @jsx React.DOM */

var React = require('react');
var TweetsApp = require('./components/TweetsApp.react');

// Snag the initial state that was passed from the server side
var initialState = JSON.parse(document.getElementById('initial-state').innerHTML)

// Render the components, picking up where react left off on the server
React.renderComponent(
  <TweetsApp tweets={initialState}/>,
  document.getElementById('react-app')
);

We start by getting our initial state from the script element that we added in home.handlebars. We parse the JSON data and then call React.renderComponent.

Because we are working with a file that will be bundled with Browserify and will have access to JSX transforms, we can use JSX syntax when passing our component as an argument.

We initialize our component by adding the state we just grabbed to an attribute on our component. This makes it available via this.props within our component’s methods.

Finally, our second argument mounts our rendered component to the #react-app element from home.handlebars.

Isomorphic Components

Now that we have all of our setup out of the way, it is time to get down to business. In our previous files, we rendered a custom component named TweetsApp.

Let’s create our TweetsApp class.

module.exports = TweetsApp = React.createClass({
  // Render the component
  render: function(){

    return (
      <div className="tweets-app">
        <Tweets tweets={this.state.tweets} />
        <Loader paging={this.state.paging}/>
        <NotificationBar count={this.state.count} onShowNewTweets={this.showNewTweets}/>
      </div>
    )

  }
});

Our app is going to have 4 child components: a Tweets display, a singular Tweet, a loading spinner for paged results, and a notification bar. We wrap them in a div element with the tweets-app class.

Very similarly to the way we passed our state via component props when transferring our server’s state, we pass our current state down to the child components via props.

But where does the state come from?

In React, setting state via props is generally considered an anti-pattern. However, when setting an initial state and transferring state from the server, this is not the case. Because the getInitialState method is only called before the first mount of our component, we use the componentWillReceiveProps method to make sure that, if the component receives new props after mounting, its state is reset accordingly:

// Set the initial component state
  getInitialState: function(props){

    props = props || this.props;

    // Set initial application state using props
    return {
      tweets: props.tweets,
      count: 0,
      page: 0,
      paging: false,
      skip: 0,
      done: false
    };

  },

  componentWillReceiveProps: function(newProps, oldProps){
    this.setState(this.getInitialState(newProps));
  },

Aside from our tweets, which we pass down from the server, our state on the client contains some new properties. We use the count property to track how many tweets are currently unread. Unread tweets are ones that were loaded via socket.io after the page loaded, but are not yet active. This count resets every time we call showNewTweets.

The page property keeps track of how many pages we have loaded from the server. While a page load is in flight (between the event kicking off and the results being rendered to the page), our paging property is set to true, preventing the event from firing again until the current request has completed. The done property is set to true when we have run out of pages.

Our skip property is like count, but never gets reset. It tells us how many tweets have entered the database since our initial load, and therefore how many indexes to skip when requesting the next page. This prevents us from rendering duplicate tweets to the page.
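
A tiny sketch of that difference (hypothetical plain objects, outside React) shows count resetting while skip keeps growing:

```javascript
// Minimal stand-in for the relevant bits of component state.
let state = { count: 0, skip: 0 };

// Like addTweet: each incoming tweet bumps both counters.
function addTweet() {
  state = { count: state.count + 1, skip: state.skip + 1 };
}

// Like showNewTweets: acknowledging tweets resets count, but never skip.
function showNewTweets() {
  state = { count: 0, skip: state.skip };
}

addTweet();
addTweet();
showNewTweets();
console.log(state); // { count: 0, skip: 2 }
```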

As it stands, we are good to go on the server side rendering of our component. However, our client side is where our state changes from UI interaction and socket events, so we need to set up some methods to handle that.

We can use the componentDidMount method to accomplish this safely, because it only runs when a component is mounted on the client:

// Called directly after component rendering, only on client
componentDidMount: function(){

  // Preserve self reference
  var self = this;

  // Initialize socket.io
  var socket = io.connect();

  // On tweet event emission...
  socket.on('tweet', function (data) {

      // Add a tweet to our queue
      self.addTweet(data);

  });

  // Attach scroll event to the window for infinity paging
  window.addEventListener('scroll', this.checkWindowScroll);

},

In the code above, we set up two event listeners to modify the state and subsequent rendering of our components. The first is our socket listener. When a new tweet is emitted, we call our addTweet method to add it to an unread queue.

// Method to add a tweet to our timeline
  addTweet: function(tweet){

    // Get current application state
    var updated = this.state.tweets;

    // Increment the unread count
    var count = this.state.count + 1;

    // Increment the skip count
    var skip = this.state.skip + 1;

    // Add tweet to the beginning of the tweets array
    updated.unshift(tweet);

    // Set application state
    this.setState({tweets: updated, count: count, skip: skip});

  },

Tweets in the unread queue are on the page, but not shown until the user acknowledges them in the NotificationBar component. When they do, an event is passed back via onShowNewTweets which calls our showNewTweets method:

// Method to show the unread tweets
  showNewTweets: function(){

    // Get current application state
    var updated = this.state.tweets;

    // Mark our tweets active
    updated.forEach(function(tweet){
      tweet.active = true;
    });

    // Set application state (active tweets + reset unread count)
    this.setState({tweets: updated, count: 0});

  },

This method loops through our tweets, sets their active property to true, and then sets our state. This makes any unshown tweets visible (via CSS).

Our second listener watches the window scroll event and fires our checkWindowScroll method to check whether we should load a new page.

// Method to check if more tweets should be loaded, by scroll position
  checkWindowScroll: function(){

    // Get scroll pos & window data
    var h = Math.max(document.documentElement.clientHeight, window.innerHeight || 0);
    var s = document.body.scrollTop;
    var scrolled = (h + s) > document.body.offsetHeight;

    // If scrolled enough, not currently paging and not complete...
    if(scrolled && !this.state.paging && !this.state.done) {

      // Set application state (Paging, Increment page)
      this.setState({paging: true, page: this.state.page + 1});

      // Get the next page of tweets from the server
      this.getPage(this.state.page);

    }
  },

In our checkWindowScroll method, if we have reached the bottom of the page, aren’t currently in the paging process, and haven’t reached the last page, we call our getPage method:

// Method to get JSON from server by page
  getPage: function(page){

    // Setup our ajax request
    var request = new XMLHttpRequest(), self = this;
    request.open('GET', 'page/' + page + "/" + this.state.skip, true);
    request.onload = function() {

      // If everything is cool...
      if (request.status >= 200 && request.status < 400){

        // Load our next page
        self.loadPagedTweets(JSON.parse(request.responseText));

      } else {

        // Set application state (Not paging, paging complete)
        self.setState({paging: false, done: true});

      }
    };

    // Fire!
    request.send();

  },

In this method we pass our incremented page index, along with the skip property of our state object, to our /page route. If there are no more tweets, we set paging to false and done to true, ending our ability to page.

If tweets are returned, we will return JSON data based upon the given arguments, which we then load with the loadPagedTweets method:

// Method to load tweets fetched from the server
  loadPagedTweets: function(tweets){

    // So meta lol
    var self = this;

    // If we still have tweets...
    if(tweets.length > 0) {

      // Get current application state
      var updated = this.state.tweets;

      // Push them onto the end of the current tweets array
      tweets.forEach(function(tweet){
        updated.push(tweet);
      });

      // This app is so fast, I actually use a timeout for dramatic effect
      // Otherwise you'd never see our super sexy loader svg
      setTimeout(function(){

        // Set application state (Not paging, add tweets)
        self.setState({tweets: updated, paging: false});

      }, 1000);

    } else {

      // Set application state (Not paging, paging complete)
      this.setState({done: true, paging: false});

    }
  },

This method takes the current set of tweets in our state object and pushes the new tweets onto the end. I use a setTimeout before calling setState so that we can actually see the loader component for at least a little while.

Check out our finished component below:

TWEETSAPP

/** @jsx React.DOM */

var React = require('react');
var Tweets = require('./Tweets.react.js');
var Loader = require('./Loader.react.js');
var NotificationBar = require('./NotificationBar.react.js');

// Export the TweetsApp component
module.exports = TweetsApp = React.createClass({

  // Method to add a tweet to our timeline
  addTweet: function(tweet){

    // Get current application state
    var updated = this.state.tweets;

    // Increment the unread count
    var count = this.state.count + 1;

    // Increment the skip count
    var skip = this.state.skip + 1;

    // Add tweet to the beginning of the tweets array
    updated.unshift(tweet);

    // Set application state
    this.setState({tweets: updated, count: count, skip: skip});

  },

  // Method to get JSON from server by page
  getPage: function(page){

    // Setup our ajax request
    var request = new XMLHttpRequest(), self = this;
    request.open('GET', 'page/' + page + "/" + this.state.skip, true);
    request.onload = function() {

      // If everything is cool...
      if (request.status >= 200 && request.status < 400){

        // Load our next page
        self.loadPagedTweets(JSON.parse(request.responseText));

      } else {

        // Set application state (Not paging, paging complete)
        self.setState({paging: false, done: true});

      }
    };

    // Fire!
    request.send();

  },

  // Method to show the unread tweets
  showNewTweets: function(){

    // Get current application state
    var updated = this.state.tweets;

    // Mark our tweets active
    updated.forEach(function(tweet){
      tweet.active = true;
    });

    // Set application state (active tweets + reset unread count)
    this.setState({tweets: updated, count: 0});

  },

  // Method to load tweets fetched from the server
  loadPagedTweets: function(tweets){

    // So meta lol
    var self = this;

    // If we still have tweets...
    if(tweets.length > 0) {

      // Get current application state
      var updated = this.state.tweets;

      // Push them onto the end of the current tweets array
      tweets.forEach(function(tweet){
        updated.push(tweet);
      });

      // This app is so fast, I actually use a timeout for dramatic effect
      // Otherwise you'd never see our super sexy loader svg
      setTimeout(function(){

        // Set application state (Not paging, add tweets)
        self.setState({tweets: updated, paging: false});

      }, 1000);

    } else {

      // Set application state (Not paging, paging complete)
      this.setState({done: true, paging: false});

    }
  },

  // Method to check if more tweets should be loaded, by scroll position
  checkWindowScroll: function(){

    // Get scroll pos & window data
    var h = Math.max(document.documentElement.clientHeight, window.innerHeight || 0);
    var s = document.body.scrollTop;
    var scrolled = (h + s) > document.body.offsetHeight;

    // If scrolled enough, not currently paging and not complete...
    if(scrolled && !this.state.paging && !this.state.done) {

      // Set application state (Paging, Increment page)
      this.setState({paging: true, page: this.state.page + 1});

      // Get the next page of tweets from the server
      this.getPage(this.state.page);

    }
  },

  // Set the initial component state
  getInitialState: function(props){

    props = props || this.props;

    // Set initial application state using props
    return {
      tweets: props.tweets,
      count: 0,
      page: 0,
      paging: false,
      skip: 0,
      done: false
    };

  },

  componentWillReceiveProps: function(newProps, oldProps){
    this.setState(this.getInitialState(newProps));
  },

  // Called directly after component rendering, only on client
  componentDidMount: function(){

    // Preserve self reference
    var self = this;

    // Initialize socket.io
    var socket = io.connect();

    // On tweet event emission...
    socket.on('tweet', function (data) {

        // Add a tweet to our queue
        self.addTweet(data);

    });

    // Attach scroll event to the window for infinity paging
    window.addEventListener('scroll', this.checkWindowScroll);

  },

  // Render the component
  render: function(){

    return (
      <div className="tweets-app">
        <Tweets tweets={this.state.tweets} />
        <Loader paging={this.state.paging}/>
        <NotificationBar count={this.state.count} onShowNewTweets={this.showNewTweets}/>
      </div>
    )

  }

});

Child Components

Our main component uses 4 child components to compose an interface based upon our current state values. Let’s review them and how they work with their parent component:

TWEETS

/** @jsx React.DOM */

var React = require('react');
var Tweet = require('./Tweet.react.js');

module.exports = Tweets = React.createClass({

  // Render our tweets
  render: function(){

    // Build list items of single tweet components using map
    var content = this.props.tweets.map(function(tweet){
      return (
        <Tweet key={tweet.twid} tweet={tweet} />
      )
    });

    // Return ul filled with our mapped tweets
    return (
      <ul className="tweets">{content}</ul>
    )

  }

});

Our Tweets component is passed our current state’s tweets via its tweets prop and is used to render our tweets. In our render method, we build a list of tweets by executing the map method on our array of tweets. Each iteration creates a new rendering of a child Tweet component, and the results are inserted into an unordered list.

TWEET

/** @jsx React.DOM */

var React = require('react');

module.exports = Tweet = React.createClass({
  render: function(){
    var tweet = this.props.tweet;
    return (
      <li className={"tweet" + (tweet.active ? ' active' : '')}>
        <img src={tweet.avatar} className="avatar"/>
        <blockquote>
          <cite>
            <a href={"http://www.twitter.com/" + tweet.screenname}>{tweet.author}</a>
            <span className="screen-name">@{tweet.screenname}</span>
          </cite>
          <span className="content">{tweet.body}</span>
        </blockquote>
      </li>
    )
  }
});

Our singular Tweet component renders each individual tweet as a list item. We conditionally render an active class based upon the tweet's active status, which helps us hide it while it is still in the queue.

Each tweet’s data is then used to fill in the predefined tweet template, so that our tweet display looks legit.

NOTIFICATIONBAR

/** @jsx React.DOM */

var React = require('react');

module.exports = NotificationBar = React.createClass({
  render: function(){
    var count = this.props.count;
    return (
      <div className={"notification-bar" + (count > 0 ? ' active' : '')}>
        <p>There are {count} new tweets! <a href="#top" onClick={this.props.onShowNewTweets}>Click here to see them.</a></p>
      </div>
    )
  }
});

Our NotificationBar is fixed to the top of the page and displays the current count of unread tweets; when clicked, it shows all the tweets currently in the queue.

We conditionally display an active class based upon whether we actually have any unread tweets, using the count prop.

On our anchor tag, an onClick handler calls our component's own prop onShowNewTweets, which is bound to showNewTweets in its parent. This allows us to pass the event back upwards so it can be handled in the parent component, where we keep our state management.
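This "events flow up through props" pattern can be sketched without any React specifics. Everything below (makeParent, the count field) is illustrative and not code from the app:

```javascript
// Framework-free sketch of passing events back up via props: the parent
// owns the state and the handler; the child only calls the callback prop
// it was given (all names here are illustrative).
function makeParent() {
  var state = { count: 2 };                              // unread-tweet count
  var showNewTweets = function () { state.count = 0; };  // parent's handler
  return {
    state: state,
    // What the parent passes down, like onShowNewTweets={this.showNewTweets}
    childProps: function () {
      return { count: state.count, onShowNewTweets: showNewTweets };
    }
  };
}

var parent = makeParent();
var props = parent.childProps();
props.onShowNewTweets();   // the "click" in the child reaches the parent
// parent.state.count is now 0
```

The child never mutates anything itself; it only reports the event upward, which is what keeps the state management in one place.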

LOADER

/** @jsx React.DOM */

var React = require('react');

module.exports = Loader = React.createClass({
  render: function(){
    return (
      <div className={"loader " + (this.props.paging ? "active" : "")}>
        <img src="svg/loader.svg" />
      </div>
    )
  }
});

Our Loader component is a fancy SVG loading animation, used during paging to indicate that we are loading a new page. An active class is set using our paging prop, which controls (via CSS) whether the component is shown or not.

Wrap Up

All that’s left to do now is to run node server on your command line! You can run this locally or just check out the live demo below. If you want to see a tweet come in live, the easiest way is to just share this article with the demo open and you can see it in real time!


In the next installment of Learning React, we will be learning how to leverage Facebook’s Flux Architecture to enforce unidirectional data flow. Flux is Facebook’s recommended complementary architecture for React applications. We will also be reviewing some open source Flux libraries that make implementing the Flux architecture a breeze.

Look for it soon!


The React.js Way: Getting Started Tutorial

Update: the second part is out! Learn more about the React.js Way in the second part of the series: Flux Architecture with Immutable.js.

Now that the popularity of React.js is growing blazingly fast and lots of interesting things are happening around it, my friends and colleagues have started asking me how they can get started with React and how they should think the React way.

(Google search trends for React in the programming category; initial public release: v0.3.0, May 29, 2013)

React itself is not a framework; rather, there are concepts, libraries and principles around it that turn it into a fast, compact and beautiful way to program your app on both the client and the server side.

In this three-part React.js tutorial series I am going to explain these concepts and give recommendations on what to use and how. We will cover ideas and technologies like:

  • ES6 React
  • virtual DOM
  • Component-driven development
  • Immutability
  • Top-down rendering
  • Rendering path and optimization
  • Common tools/libs for bundling, ES6, request making, debugging, routing, etc.
  • Isomorphic React

And yes we will write code. I would like to make it as practical as possible.
All the snippets and post-related code are available in the RisingStack GitHub repository.

This article is the first of the three. Let's jump in!

Repository:
https://github.com/risingstack/react-way-getting-started

1. Getting Started with the React.js Tutorial

If you are already familiar with React and you understand the basics, like the concept of the virtual DOM and thinking in components, then this React.js tutorial is probably not for you. We will discuss intermediate topics in the upcoming parts of this series. It will be fun; I recommend checking back later.

Is React a framework?

In a nutshell: no, it's not.
Then what the hell is it, and why is everybody so keen to start using it?

React is the "View" in the application, and a fast one. It also provides different ways to organize your templates and gets you to think in components. In a React application, you should break down your site, page or feature into smaller component pieces. It means that your site will be built from the combination of different components. These components are themselves built on top of other components, and so on. When a problem becomes challenging, you can break it down into smaller ones and solve it there; you can also reuse those pieces somewhere else later. Think of them like Lego bricks. We will discuss component-driven development more deeply later in this article.

React also has this virtual DOM thing, which makes rendering super fast while keeping it easy to understand and control. You can combine this with the idea of components and have the power of top-down rendering. We will cover this topic in the second article.

Ok, I admit, I still didn't answer the question. We have components and fast rendering – but why is that a game changer? Because React is mainly a concept, and only secondarily a library.
There are already several libraries following these ideas – some faster, some slower – each slightly different. Like every programming concept, React has its own solutions, tools and libraries turning it into an ecosystem. In this ecosystem, you have to pick your own tools and build your own ~framework. I know it sounds scary, but believe me, you already know most of these tools; we will just connect them to each other, and later you will be very surprised how easy it is. For example, for dependencies we won't use any magic, just Node's require and npm. For pub-sub, we will use Node's EventEmitter, and so on.

(Facebook announced Relay their framework for React at the React.js Conf in January 2015. But it’s not available yet. The date of the first public release is unknown.)

Are you excited already? Let’s dig in!

The Virtual DOM concept in a nutshell

To track model changes and apply them to the DOM (a.k.a. rendering), we have to be aware of two important things:

  1. when data has changed,
  2. which DOM element(s) need to be updated.

For change detection (1), React uses an observer model instead of dirty checking (continuously polling the model for changes). That's why it doesn't have to calculate what has changed – it knows immediately. This reduces the calculations and makes the app smoother. But the really cool idea here is how it manages the DOM manipulations:

For the DOM updating challenge (2), React builds a tree representation of the DOM in memory and calculates which DOM elements should change. DOM manipulation is heavy, and we would like to keep it to a minimum. Luckily, React tries to leave as many DOM elements untouched as possible. Because the minimal set of changes can be computed quickly from the in-memory object representation, the cost of the actual DOM changes is reduced nicely.
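To make the idea concrete, here is a toy sketch of tree diffing in plain JavaScript. This is not React's actual algorithm (the real diff also handles keys, attributes and much more); it only shows how comparing two in-memory trees can yield a minimal patch list:

```javascript
// Toy virtual-DOM diff: walk two in-memory trees and collect patches
// for the nodes that changed, so only those need real DOM manipulation.
function diff(oldNode, newNode, path, patches) {
  path = path || 'root';
  patches = patches || [];
  if (oldNode.tag !== newNode.tag || oldNode.text !== newNode.text) {
    patches.push({ path: path, replaceWith: newNode });
    return patches;                  // replace the whole subtree, stop here
  }
  var oldChildren = oldNode.children || [];
  var newChildren = newNode.children || [];
  for (var i = 0; i < newChildren.length; i++) {
    diff(oldChildren[i] || {}, newChildren[i], path + '/' + i, patches);
  }
  return patches;
}

var before = { tag: 'ul', children: [{ tag: 'li', text: 'Foo' }, { tag: 'li', text: 'Bar' }] };
var after  = { tag: 'ul', children: [{ tag: 'li', text: 'Foo' }, { tag: 'li', text: 'Baz' }] };
var patches = diff(before, after);   // only the second <li> is patched
```

The unchanged first list item produces no patch at all, which is exactly why untouched subtrees cost nothing at the real-DOM level.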

Since React's diffing algorithm uses the tree representation of the DOM and re-calculates whole subtrees when their parent is modified (marked as dirty), you should be aware of your model changes, because the whole subtree will be re-rendered.
Don't be sad; later we will optimize this behavior together (spoiler: with shouldComponentUpdate() and ImmutableJS).

(source: React's diffing algorithm, by Christopher Chedeau)

How to render on the server too?

Since this kind of DOM representation is a fake, in-memory DOM, it's possible to render the HTML output on the server side as well (without JSDom, PhantomJS, etc.). React is also smart enough to recognize that the markup is already there (from the server) and will add only the event handlers on the client side. This will be very useful in the third article, where we will write an isomorphic application with React.

Interesting: React's rendered HTML markup contains data-reactid attributes, which help React track DOM nodes.

Useful links, other virtual DOM libraries

Component-driven development

It was one of the most difficult parts for me to pick up when I was learning React.
In component-driven development, you won't see the whole site in one template.
In the beginning, you will probably think that it sucks. But I'm pretty sure that later you will recognize the power of thinking in smaller pieces, each with less responsibility. It makes things easier to understand, to maintain and to cover with tests.

How should I imagine it?

Check out the picture below. This is a possible component breakdown of a feature/site. Each of the bordered areas with different colors represents a single type of component. According to this, you have the following component hierarchy:

  • FilterableProductTable

What should a component contain?

First of all, it's wise to follow the single responsibility principle: ideally, design your components to be responsible for only one thing. When you start to feel your component is doing too much, you should consider breaking it down into smaller ones.

Since we are talking about component hierarchy, your components will use other components as well. But let’s see the code of a simple component in ES5:

var HelloComponent = React.createClass({
    render: function() {
        return <div>Hello {this.props.name}</div>;
    }
});

But from now on, we will use ES6. ;)
Let’s check out the same component in ES6:

class HelloComponent extends React.Component {
  render() {
    return <div>Hello {this.props.name}</div>;
  }
}

JS, JSX

As you can see, our component is a mix of JS and HTML code. Wait, what? HTML in my JavaScript? Yes, you probably think it's strange, but the idea here is to have everything in one place. Remember, single responsibility. It makes a component extremely flexible and reusable.

In React, it’s possible to write your component in pure JS like:

  render () {
    return React.createElement("div", null, "Hello ",
        this.props.name);
  }

But I think it's not very comfortable to write your HTML this way. Luckily, we can use the JSX syntax (a JavaScript extension) which lets us write the HTML-like markup inline:

  render () {
    return <div>Hello {this.props.name}</div>;
  }

What is JSX?
JSX is an XML-like syntax extension to ECMAScript. JSX and HTML syntax are similar but differ in places; for example, the HTML class attribute is called className in JSX. For more differences and deeper knowledge, check out Facebook's HTML Tags vs. React Components guide.

Because JSX is not supported in browsers by default (maybe someday), we have to compile it to JS. I'll write about how to use JSX in the setup section later (by the way, Babel can also transpile JSX to JS).

Useful links about JSX:
JSX in depth
Online JSX compiler
Babel: How to use the react transformer.

What else can we add?

Each component can have an internal state, some logic, event handlers (for example: button clicks, form input changes), and it can also have inline styles. Basically, everything that is needed for proper display.

You can see {this.props.name} in the code snippet above. It means we can pass properties to our components when we are building our component hierarchy, like: <MyComponent name="John Doe" />
This makes the component reusable and makes it possible to pass our application state down from the root component to the child components through the whole application, each component receiving just the necessary part of the data.

Check this simple React app snippet below:

class UserName extends React.Component {
  render() {
    return <div>name: {this.props.name}</div>;
  }
}

class User extends React.Component {
  render() {
    return <div>
        <h1>City: {this.props.user.city}</h1>
        <UserName name={this.props.user.name} />
      </div>;
  }
}

var user = { name: 'John', city: 'San Francisco' };
React.render(<User user={user} />, mountNode);

Useful links for building components:
Thinking in React

React loves ES6

ES6 is here, and there is no better place to try it out than your shiny new React project.

React wasn't born with ES6 syntax; support came in early 2015, with version v0.13.0.

However, explaining ES6 deeply is not in the scope of this article; we will just use some of its features, like classes, arrow functions, consts and modules. For example, we will inherit our components from the React.Component class.

Since ES6 is only partly supported by browsers, we will write our code in ES6 but transpile it to ES5 later, making it work in every modern browser even without ES6 support.
To achieve this, we will use the Babel transpiler. It has a nice, compact intro to the supported ES6 features; I recommend checking it out: Learn ES6

Useful links about ES6
Babel: Learn ES6
React ES6 announcement

Bundling with Webpack and Babel

I mentioned earlier that we will involve tools you are already familiar with and build our application from a combination of them. The first tools, which may already be well known, are Node.js's module system and its package manager, npm. We will write our code in the "node style" and require everything we need. React is available as a single npm package.
This way our component will look like this:

// would be in ES5: var React = require('react/addons');
import React from 'react/addons';

class MyComponent extends React.Component { ... }

// would be in ES5: module.exports = MyComponent;
export default MyComponent;

We are going to use other npm packages as well; most of them make sense on the client side too. For example, we will use debug for debugging and superagent for composing requests.

Now we have a dependency system from Node (more accurately, ES6) and a solution for almost everything from npm. What's next? We should pick our favorite libraries for our problems and bundle them up for the client as a single codebase. To achieve this, we need a solution for making them run in the browser.

This is the point where we should pick a bundler. Two of the most popular solutions today are the Browserify and Webpack projects. We are going to use Webpack, because my experience is that Webpack is preferred by the React community. However, I'm pretty sure that you can do the same with Browserify as well.

How does it work?

Webpack bundles our code and the required packages into output file(s) for the browser. Since we are using JSX and ES6, which we would like to transpile to ES5, we have to place a JSX and ES6 to ES5 transpiler into this flow as well. Actually, Babel can do both for us. Let's just use that!

We can do that easily because Webpack is configuration-oriented.

What do we need for this? First we need to install the necessary modules (start with npm init if you don't have a package.json file yet).

Run the following commands in your terminal (Node.js or io.js and npm are necessary for this step):

npm install --save-dev webpack
npm install --save-dev babel
npm install --save-dev babel-loader

After that, we create the webpack.config.js configuration file for Webpack (it's written in ES5, since we don't have the ES6 transpiler available in the Webpack configuration file itself):

var path = require('path');

module.exports = {
  entry: path.resolve(__dirname, '../src/client/scripts/client.js'),
  output: {
    path: path.resolve(__dirname, '../dist'),
    filename: 'bundle.js'
  },

  module: {
    loaders: [
      {
        test: /src\/.+.js$/,
        exclude: /node_modules/,
        loader: 'babel'
      }
    ]
  }
};

If we did it right, our application's entry point is ./src/client/scripts/client.js, and running the webpack command produces ./dist/bundle.js.

After that, you can just include the bundle.js script in your index.html and it should work:
<script src="bundle.js"></script>

(Hint: you can serve your site with node-static. Install the module with npm install -g node-static and run static . to serve your folder's content at 127.0.0.1:8080.)

Project setup

Now we have installed and configured Webpack and Babel properly.
As in every project, we need a project structure.

Folder structure

I prefer to follow the project structure below:

config/
    app.js
    webpack.js (js config over json -> flexible)
src/
  app/ (the React app: runs on server and client too)
    components/
      __tests__ (Jest test folder)
      AppRoot.jsx
      Cart.jsx
      Item.jsx
    index.js (just to export app)
    app.js
  client/  (only browser: attach app to DOM)
    styles/
    scripts/
      client.js
    index.html
  server/
    index.js
    server.js
.gitignore
.jshintrc
package.json
README.md

The idea behind this structure is to separate the React app from the client and server code, since our React app can run on both the client and the server side (an isomorphic app; we will dive deep into this in an upcoming blog post).

How to test my React app

When moving to a new technology, one of the most important questions should be testability. Without good test coverage, you are playing with fire.

Ok, but which testing framework to use?
My experience is that testing a front-end solution always works best with a test framework from the same creators. Accordingly, I started to test my React apps with Jest. Jest is a test framework by Facebook and has many great features that I won't cover in this article.

I think it's more important to talk about the way of testing a React app. Luckily, the single responsibility principle forces our components to do only one thing, so we should test only that thing: pass the properties to our component, trigger the possible events and check the rendered output. It sounds easy, because it is.
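That recipe can be sketched framework-free: below, a plain function stands in for a component's render method (the NotificationBar-style names and class strings are borrowed purely for illustration, not taken from any test framework API):

```javascript
// The testing recipe in miniature: a pure function plays the role of a
// component's render method (names here are illustrative).
function renderNotificationBar(props) {
  return {
    className: 'notification-bar' + (props.count > 0 ? ' active' : ''),
    text: 'There are ' + props.count + ' new tweets!'
  };
}

// Pass the properties, then check the rendered output:
var hidden = renderNotificationBar({ count: 0 });  // className: 'notification-bar'
var shown  = renderNotificationBar({ count: 2 });  // className: 'notification-bar active'
```

A real Jest test does the same thing with a rendered component instead of a plain object, but the shape of the assertion is identical: given these props, expect this output.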

For a more practical example, I recommend checking out the Jest React.js tutorial.

Test JSX and ES6 files

To test our ES6 syntax and JSX files, we have to transform them for Jest. Jest has a config variable where you can define a preprocessor (scriptPreprocessor) for that.
First we create the preprocessor, and then we pass its path to Jest. You can find a working example of a Babel Jest preprocessor in our repository.

Jest also has an example for React ES6 testing.

(The Jest config goes into package.json.)

Takeaway

In this article, we examined why React is fast and scalable, and how different its approach is. We went through how React handles rendering, what component-driven development is, and how you should set up and organize your project. These are the very basics.

In the upcoming “The React way” articles we are going to dig deeper.

I still believe that the best way to learn a new programming approach is to start developing and writing code.
That's why I would like to ask you to write something awesome, and also to spend some time checking out the official React website, especially the guides section. It's an excellent resource; the Facebook developers and the React community did an awesome job with it.

Next up

If you liked this article, subscribe to our newsletter. The remaining parts of The React way post series are coming soon. We will cover interesting topics like:

  • immutability
  • top-down rendering
  • Flux
  • isomorphic way (common app on client and server)

See you soon; until then, check out the repository!
https://github.com/RisingStack/react-way-getting-started

The React.js Way: Flux Architecture with Immutable.js

This article is the second part of the "The React.js Way" blog series. If you are not familiar with the basics, I strongly recommend reading the first article: The React.js Way: Getting Started Tutorial.

In the previous article, we discussed the concept of the virtual DOM and how to think in the component way. Now it’s time to combine them into an application and figure out how these components should communicate with each other.

Components as functions

The really cool thing about a single component is that you can think about it like a function in JavaScript. When you call a function with parameters, it returns a value. Something similar happens with a React.js component: you pass properties, and it returns the rendered DOM. If you pass different data, you get different responses. This makes components extremely reusable and handy to combine into an application. This idea comes from functional programming, which is beyond the scope of this article. If you are interested, I highly recommend reading Mikael Brevik's Functional UI and Components as Higher Order Functions blog post for a deeper understanding of the topic.
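The analogy can be made concrete in plain JavaScript (no React here; userName and user are illustrative stand-ins for the components from the previous article):

```javascript
// Components behave like pure functions: same input (props), same output
// (rendered markup), and they compose the same way function calls do.
function userName(props) {
  return '<div>name: ' + props.name + '</div>';
}
function user(props) {
  // a "component" using another "component", like <UserName /> inside <User />
  return '<div><h1>City: ' + props.city + '</h1>' + userName(props) + '</div>';
}

var markup = user({ name: 'John', city: 'San Francisco' });
// Calling it again with the same data yields exactly the same markup.
```

Because the output depends only on the input, such pieces are trivial to test and to reuse, which is the point of the function analogy.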

Top-down rendering

Ok, it's cool that we can combine our components easily to form an app, but it doesn't make any sense without data. We discussed last time that with React.js your app's structure is a hierarchy with a root node, where you can pass in the data as a parameter and see how your app responds to it through the components. You pass the data at the top, and it goes down from component to component: this is called top-down rendering.

React.js component hierarchy

It's great that we pass the data at the top and it goes down via component properties, but how can we notify a component at a higher level in the hierarchy if something should change? For example, when the user presses a button?
We need something that stores the actual state of our application, something that we can notify when the state should change. The new state should be passed to the root node, and the top-down rendering should kick in again to generate (re-render) the new output (DOM) of our application. This is where Flux comes into the picture.

Flux architecture

You may have already heard about the Flux architecture and its concepts.
I'm not going to give a very detailed overview of Flux in this article; I've already done that in the Flux inspired libraries with React post.

Application architecture for building user interfaces – Facebook flux

A quick reminder: Flux is a unidirectional data flow concept where you have a Store, which contains the actual state of your application as pure data. It can emit events when it changes and lets your application's components know what should be re-rendered. It also has a Dispatcher, which is a centralized hub and creates a bridge between your app and the Store. It has actions that you can call from your app, and it emits events to the Store. The Store is subscribed to those events and changes its internal state when necessary. Easy, right? ;)

Flux architecture

PureRenderMixin

Where are we with our current application? We have a data store that contains the actual state. We can communicate with this store and pass data to our app, which responds to the incoming state with the rendered DOM. It's really cool, but it sounds like lots of rendering (it is). Remember the component hierarchy and top-down rendering – everything responds to the new data.

I mentioned earlier that the virtual DOM optimizes DOM manipulations nicely, but that doesn't mean we shouldn't help it and minimize its workload. For this, we have to tell a component whether it should be re-rendered for the incoming properties, based on comparing the new and the current properties. In the React.js lifecycle, you can do this with shouldComponentUpdate.

React.js luckily has a mixin called PureRenderMixin, which compares the new incoming properties with the previous ones and stops rendering when they are the same. It uses the shouldComponentUpdate method internally.
That's nice, but PureRenderMixin can't compare objects deeply. It checks reference equality (===), which will be false for different objects with the same data:

boolean shouldComponentUpdate(object nextProps, object nextState)

If shouldComponentUpdate returns false, then render() will be skipped until the next state change. (In addition, componentWillUpdate and componentDidUpdate will not be called.)

var a = { foo: 'bar' };
var b = { foo: 'bar' };

a === b; // false

The problem here is that components will be re-rendered for the same data if we pass it as a new object (because of the different object reference). But mutating the original object won't fly either, because:

var a = { foo: 'bar' };
var b = a;
b.foo = 'baz';
a === b; // true

Sure, it wouldn't be hard to write a mixin that does deep object comparison instead of reference checking, but React.js calls shouldComponentUpdate frequently, and deep checking is expensive: you should avoid it.

I recommend checking out the Advanced Performance with React.js article by Facebook.

Immutability

The problem starts escalating quickly if our application state is a single, big, nested object, like our Flux store.
We would like to keep the object reference the same when the data doesn't change, and get a new object when it does. This is exactly what Immutable.js does.

Immutable data cannot be changed once created, leading to much simpler application development, no defensive copying, and enabling advanced memoization and change detection techniques with simple logic.

Check the following code snippet:

var stateV1 = Immutable.fromJS({
  users: [
    { name: 'Foo' },
    { name: 'Bar' }
  ]
});

var stateV2 = stateV1.updateIn(['users', 1], function () {
  return Immutable.fromJS({
    name: 'Barbar'
  });
});

stateV1 === stateV2; // false
stateV1.getIn(['users', 0]) === stateV2.getIn(['users', 0]); // true
stateV1.getIn(['users', 1]) === stateV2.getIn(['users', 1]); // false

As you can see, we can use === to compare our objects by reference, which means we have a super fast way of comparing objects, and it's compatible with React's PureRenderMixin. Accordingly, we should write our entire application with Immutable.js: our Flux Store should be an immutable object, and we should pass immutable data as properties to our components.

Now let’s go back to the previous code snippet for a second and imagine that our application component hierarchy looks like this:

User state

You can see that only the red components will be re-rendered after the state change, because the others keep the same reference as before. This means the root component and one of the users will be re-rendered.

With immutability, we optimized the rendering path and supercharged our app. Combined with the virtual DOM, this makes the "React.js way" a blazingly fast application architecture.

Learn more about how persistent immutable data structures work and watch the Immutable Data and React talk from the React.js Conf 2015.
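Putting the pieces together, a render guard in the spirit of PureRenderMixin but relying on immutable references might look like the sketch below. The mixin name and the data prop are illustrative, not React API, and plain objects stand in for Immutable.js values in the simulated check:

```javascript
// Immutable-aware render guard: with immutable values, an unchanged prop
// keeps its object reference, so === is both fast and correct.
var ImmutableRenderMixin = {
  shouldComponentUpdate: function (nextProps) {
    return this.props.data !== nextProps.data;  // re-render only on a new reference
  }
};

// Simulated check (plain objects stand in for Immutable.js values):
var sameRef = { foo: 'bar' };
var component = Object.create(ImmutableRenderMixin);
component.props = { data: sameRef };
component.shouldComponentUpdate({ data: sameRef });        // false: skip render
component.shouldComponentUpdate({ data: { foo: 'bar' } }); // true: re-render
```

This is the payoff of structural sharing: the cheap reference check is only trustworthy because Immutable.js guarantees unchanged data keeps its reference.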

Check out the example repository with ES6, Flux architecture, and Immutable.js:
https://github.com/RisingStack/react-way-immutable-flux



How to build your own Quadcopter Autopilot / Flight Controller



By Dr Gareth Owen (gho-quad2@ghowen.me)

Note: I get a lot of e-mails about this article, so regretfully I'm often not able to respond unless I can give a quick answer.

Fig 1: 3DR Quadcopter

Contents

Introduction

This article will walk you through building your own controller whilst teaching you the details of how it works. This information is hard to find, particularly for those of us who are not aerospace engineers! Personally, it took me six months because much of my time was spent bug hunting and tuning, but with this article you can achieve the same in anywhere from a few hours to a few days. I'll teach you the pitfalls so that you don't waste your time like I did.

Fig 2: ArduPilot hardware

The first shortcut is your choice of hardware. I chose to build my own from scratch at a stage when I knew nothing of RC or how to fly – this was a mistake. I thought that I would save a few pennies by doing it myself, but after lots of accidental short circuits and replacement microchips and sensors, I've spent a fortune! So do yourself a favour and buy the ArduPilot 2.5 control board, wire up your copter, learn RC and how to fly, and then come back here. The board is essentially just an Arduino with some sensors connected, which we will program in this article with our own software – by using it you'll have everything connected that you need to get flying, and you'll also be able to play with the excellent ArduCopter software.

The ArduPilot project is sponsored by 3D Robotics – this means that they build the hardware and sell it for a small profit, then feed some of this profit back to the community. The hardware and software are entirely open source, and anyone is free to copy them. You can buy the original direct from 3D Robotics, or identical copies from Hobbyking (named HKPilot) and RCTimer (named ArduFlyer).

In this article, I am going to assume you have the ArduPilot hardware, which is essentially an Arduino with attached sensors. If you choose to ignore my advice and build your own hardware, or use a plain Arduino board, then you'll need to replace the lower-level code (the HAL library). I'm also going to assume you have a quadcopter in X configuration – although not a lot of work is required (just different motor mixing) to switch between +/X and octo/hexacopters, they won't be given any substantial attention in this article. Ideally, you've already flown your quad with the ArduCopter code loaded, and hence you should have your motors connected as follows and spinning in the direction shown.

Fig 3: Propeller Configuration

I'm also going to assume you have some experience with the Arduino – or at least with C/C++. The Arduino libraries are not particularly brilliant or well suited to this task, so we'll be using some of the ArduPilot libraries, which are superior. However, we'll be keeping their use to a minimum in favour of the DIY approach (which is why you're here, after all). The first and main library we're going to use is the ArduPilot Hardware Abstraction Layer (HAL) library. This library hides some of the low-level details of how you read and write to pins, among other things – the advantage is that the software can then be ported to new hardware by changing only the hardware abstraction layer. In the case of ArduPilot, there are two hardware platforms, APM and PX4, each of which has its own HAL library, allowing the ArduPilot code to remain the same across both. If you later decide to run your code on the Raspberry Pi, you'll only need to change the HAL.

The HAL library is made up of several components:

  • RCInput – for reading the RC Radio.
  • RCOutput – for controlling the motors and other outputs.
  • Scheduler – for running particular tasks at regular time intervals.
  • Console – essentially provides access to the serial port.
  • I2C, SPI – bus drivers (small circuit board networks for connecting to sensors)
  • GPIO – General Purpose Input/Output – allows raw access to the Arduino pins, but in our case is mainly used for the LEDs

WHAT TO DOWNLOAD: You'll need to download the ArduPilot version of the Arduino IDE. Also grab the libraries, which should be placed in your sketches folder, and make sure you select your board type from the Arduino menu like so:

Fig 4: ArduPilot Arduino IDE Setup

Our flight controller is going to have to read the radio inputs (pilot commands), measure our current attitude (yaw/pitch/roll), and change the motor speeds to orient the quad in the desired way. So let's start by reading the radio.

Back to top

Reading the Radio Inputs

RC Radios have several outputs, one for each channel/stick/switch/knob. Each radio output transmits a pulse at 50Hz, with the width of the pulse determining where the stick is on the RC transmitter. Typically, the pulse is between 1000us and 2000us long, with an 18000us to 19000us pause before the next; a throttle of 0 would produce a pulse of 1000us and full throttle would be 2000us. Sadly, most radios are not this precise, so we normally have to measure the min/max pulse widths for each stick (which we’ll do in a minute).

Fig 5: Radio Signals

The ArduPilot HAL library does the dirty work of measuring these pulse widths for us. If you were coding this yourself, you’d have to use pin interrupts and the timer to measure them; Arduino’s pulseIn isn’t suitable because it holds (blocks) the processor whilst it is measuring, which stops us from doing anything else. It’s not hard to implement an interrupt-based measurer, it can be programmed in an hour or so, but as it’s fairly mundane we won’t.

Here’s some sample code for measuring the channel ‘values’ using the APM HAL library. The channel values are just a measure in microseconds of the pulse width.

#include <AP_Common.h>
#include <AP_Math.h>
#include <AP_Param.h>
#include <AP_Progmem.h>
#include <AP_ADC.h>
#include <AP_InertialSensor.h>

#include <AP_HAL.h>
#include <AP_HAL_AVR.h>

const AP_HAL::HAL& hal = AP_HAL_AVR_APM2;  // Hardware abstraction layer

void setup()
{

}

void loop()
{
  uint16_t channels[8];  // array for raw channel values

  // Read RC channels and store in channels array
  hal.rcin->read(channels, 8);

  // Copy from channels array to something human readable - array entry 0 = input 1, etc.
  uint16_t rcthr, rcyaw, rcpit, rcroll;   // Variables to store rc input
  rcthr = channels[2];
  rcyaw = channels[3];
  rcpit = channels[1];
  rcroll = channels[0];

  hal.console->printf_P(
            PSTR("individual read THR %d YAW %d PIT %d ROLL %d\r\n"),
            rcthr, rcyaw, rcpit, rcroll);

  hal.scheduler->delay(50);  //Wait 50ms 
}

AP_HAL_MAIN();    // special macro that replaces one of Arduino's to set up the code (e.g. ensures loop() is called in a loop).

Create a new sketch and upload the code to the ArduPilot hardware. Use the serial monitor and write down the minimum and maximum values for each channel (whilst moving the sticks to their extremes).

Now let’s scale the stick values so that they represent something meaningful. We’re going to use a function called map, which takes a number in one range and scales it into another; e.g. if we had a value of 50 in the range 0-100, and we wanted to scale it into the range 0-500, the map function would return 250.

The map function (copied from Arduino library) should be pasted into your code after the #include and defines:

long map(long x, long in_min, long in_max, long out_min, long out_max)
{
  return (x - in_min) * (out_max - out_min) / (in_max - in_min) + out_min;
}

It is used as:

result = map(VALUE, FROM_MIN, FROM_MAX, TO_MIN, TO_MAX);

It makes sense for the throttle to remain untouched; no doubt you’ve calibrated your ESCs with the existing throttle values (if you followed my advice about flying first), so let’s not play with it. Pitch and roll should be scaled to between -45 degrees and +45 degrees, whilst yaw might scale to +-150 degrees.

My code in loop() now looks as follows after I’ve substituted in the map function with the min/max values for each stick. We’ll also change the variable types to long to support negative numbers.

long rcthr, rcyaw, rcpit, rcroll;   // Variables to store rc input
rcthr = channels[2];
rcyaw = map(channels[3], 1068, 1915, -150, 150);
rcpit = map(channels[1], 1077, 1915, -45, 45);
rcroll = map(channels[0], 1090, 1913, -45, 45);

Pitch should be negative when the stick is forward and roll/yaw should be negative when the stick is left. If this isn’t the case then reverse them until they are correct.

You should now print the new values out and monitor them on the serial monitor. Ideally, they should be zero or very close when the sticks (except thr) are centred. Play with the min/max values until they are. There will be some jitter (waving about the true value) because the sticks on your transmitter are analog but it should be of the order +-1 or +-2 degrees. Once you’ve got your quad flying, you might consider returning here to introduce an averaging filter.

Ensure that pitch forward, roll left, and yaw left are negative numbers – if they’re not, put a minus sign before the map. Also ensure that the throttle increases in value as you raise the throttle.

Back to top

Controlling the motors

Motors are controlled through the Electronic Speed Controllers (ESCs). They work on pulse widths between approximately 1000us and 2000us, like the RC radio receiver: sending a pulse of 1000us typically means off, and a pulse of 2000us means fully on. The ESCs normally expect to receive the pulse at 50Hz, but most off-the-shelf ESCs average the last 5-10 values and then send the average to the motors. Whilst this can work on a quad, it behaves much better if we minimise the effect of this averaging filter to give near-instantaneous response. Hence, the APM HAL library sends the pulse at 490Hz, meaning that the 5-10 pulses which are averaged occur very quickly, largely negating the filter’s effect.

In setup(), let’s enable the outputs:

hal.rcout->set_freq(0xF, 490);
hal.rcout->enable_mask(0xFF);

After your includes, let’s define a mapping of output number to motor name; this mapping is the same as ArduCopter uses, but the numbering starts from zero rather than one.

#define MOTOR_FL   2    // Front left    
#define MOTOR_FR   0    // Front right
#define MOTOR_BL   1    // back left
#define MOTOR_BR   3    // back right

In your loop, after reading the radio inputs, let’s send the radio throttle straight to one of the motors:

hal.rcout->write(MOTOR_FR, rcthr);

You can now program your quad and try it, WITHOUT propellers. Slowly raise the throttle and the front-right motor should spin up. By repeating the last line for the remaining three motors, all your motors would spin up, although the quad will just crash if you have propellers on, because we have yet to do stabilisation: slight differences between the motors, props, ESCs, etc. mean that slightly unequal force is applied at each motor, so it’ll never remain level.

** Comment out the write line before proceeding for safety reasons **

Back to top

Determining Orientation

Next, we need to determine which orientation, or attitude as it’s known, the quadcopter is in. We can then use this, along with the pilot’s commands, to vary the motor speeds. There are two sensors used for determining orientation: accelerometers and gyroscopes. Accelerometers measure acceleration in each direction (gravity is an acceleration force, so it gives us a direction to ground) and gyroscopes measure angular velocity (i.e. rotation speed around each axis). However, accelerometers are very sensitive to vibrations and aren’t particularly quick, whilst gyroscopes are quick and vibration resistant but tend to drift (e.g. show a constant rotation of 1-2 degrees/sec when stationary). So we use a sensor fusion algorithm to fuse the two together and get the best of both worlds. The details of such an algorithm are outside the scope of this article; typically a Kalman filter is used, or in the case of ArduPilot, a Direction Cosine Matrix (DCM). I’ve provided the DCM link for interest if you have a maths background; for the rest of us, we don’t need to know the details.

Thankfully, the MPU6050 sensor chip containing the accelerometer and gyroscopes has a built-in Digital Motion Processing unit (aka sensor fusion) that we can use. It will fuse the values together and present us with the result as quaternions. Quaternions are a different way of representing orientation (as opposed to Euler angles: yaw, pitch, roll) that has some advantages; if you’ve programmed 3D graphics you’ll already be familiar with them. To make things easier, we tend to convert quaternions into Euler angles and work with those instead.

Here’s the code to use the MPU6050 sensor with sensor fusion.

In setup():

// Disable barometer to stop it corrupting bus
hal.gpio->pinMode(40, GPIO_OUTPUT);
hal.gpio->write(40, 1);

// Initialise MPU6050 sensor (requires a global declaration next to hal:
//   AP_InertialSensor_MPU6000 ins; )
ins.init(AP_InertialSensor::COLD_START,
		 AP_InertialSensor::RATE_100HZ,
		 NULL);

// Initialise MPU6050's internal sensor fusion (aka DigitalMotionProcessing)
hal.scheduler->suspend_timer_procs();  // stop bus collisions
ins.dmp_init();
ins.push_gyro_offsets_to_dmp();
hal.scheduler->resume_timer_procs();

Now let’s read the sensor. At the beginning of loop() add this line, which will force a wait until there is new sensor data. There’s no point in changing motor speeds unless we know something new.

while (ins.num_samples_available() == 0);

** Remove the 50ms delay from the loop, no longer needed **

Now let’s get the yaw/pitch/roll from the sensor and convert them from radians to degrees:

ins.update();
float roll, pitch, yaw;   // attitude in radians, converted to degrees below
ins.quaternion.to_euler(&roll, &pitch, &yaw);
roll = ToDeg(roll);
pitch = ToDeg(pitch);
yaw = ToDeg(yaw);

Now let’s print it out to the serial console:

hal.console->printf_P(
	  PSTR("P:%4.1f  R:%4.1f Y:%4.1f\n"),
			  pitch,
			  roll,
			  yaw);

You need to rate-limit this print statement, e.g. ensure it’s only printed once every 20 times around the loop (hint: use a counter). Otherwise the serial line will get flooded.

Move your copter around and ensure the right values are changing!

Back to top

Acrobatic / Rate mode control

Acrobatic/rate mode is where the sticks on your transmitter tell the quad to rotate at a particular rate (e.g. 50 deg/sec), and when you return the sticks to centre the quad stops rotating. This is as opposed to stabilise mode, where returning the sticks to centre will level the quadcopter. It’s a mode that takes practice to learn to fly in, but we’re required to implement it first because the stabilise controllers operate on top of the rate controllers.

So, our aim is for each of the pilot’s sticks to dictate a rate of rotation and for the quad to try to achieve that rate of rotation. So if the pilot is saying rotate at 50 deg/sec forward on the pitch axis, and we’re currently not rotating, then we need to speed up the rear motors and slow down the front ones. The question is, by how much do we speed them up or slow them down? To decide this, you need to understand Proportional Integral Derivative (PID) controllers, which we are going to make extensive use of. Whilst somewhat of a dark art, the principles are fairly straightforward. Let’s assume our quadcopter is not rotating on the pitch axis at the moment, so actual = 0, and let’s further assume the pilot wants the quad to rotate at 15 deg/sec, so desired = 15. Now we can say that the error between what we want and what we’ve got is:

error = desired - actual = 15 - 0 = 15

Now given our error, we multiply it by a constant, Kp, to produce the number which we will use to slow down or speed up the motors. So, we can say the motors change as follows:

frontMotors = throttle - error*Kp
rearMotors = throttle + error*Kp

As the motors speed up, the quad will start to rotate and the error will decrease, causing the difference between the front/rear motor speeds to decrease. This is desirable, as having a difference in motor speeds will accelerate the quad, and having no difference will cause it to hold level (in a perfect world). Believe it or not, this is all we really need for rate mode: apply this principle to each of the axes (yaw, pitch, roll), using the gyros to tell us what rate we’re rotating at (actual). The question you’re probably asking is, what should I set Kp to? Well, that’s a matter for experimentation; I’ve set some values that work well with my 450mm quadcopter, so stick with these until you’ve got this coded.

Fig 6: Rate only PID

If you’ve studied PIDs before, you’ll know there are actually two other parts to a PID: integral and derivative. Integral (Ki is the tuning parameter) essentially compensates for a constant error; sometimes the Kp term might not provide enough response to get all the way if the quad is unbalanced, or there’s some wind. Derivative we’re going to ignore for now.

Let’s get started, define the following PID array and constants globally:

PID pids[6];
#define PID_PITCH_RATE 0
#define PID_ROLL_RATE 1
#define PID_PITCH_STAB 2
#define PID_ROLL_STAB 3
#define PID_YAW_RATE 4
#define PID_YAW_STAB 5

Now initialise the PIDs with sensible values (you might need to come back and adjust these later) in the setup() function.

pids[PID_PITCH_RATE].kP(0.7);
//  pids[PID_PITCH_RATE].kI(1);
pids[PID_PITCH_RATE].imax(50);

pids[PID_ROLL_RATE].kP(0.7);
//  pids[PID_ROLL_RATE].kI(1);
pids[PID_ROLL_RATE].imax(50);

pids[PID_YAW_RATE].kP(2.5);
//  pids[PID_YAW_RATE].kI(1);
pids[PID_YAW_RATE].imax(50);

pids[PID_PITCH_STAB].kP(4.5);
pids[PID_ROLL_STAB].kP(4.5);
pids[PID_YAW_STAB].kP(10);

Leave the I-terms commented out for now, until we can get it flying OK, as they may make it difficult to identify problems in the code.

Ask the gyros for rotational velocity data for each axis.

Vector3f gyro = ins.get_gyro();

Gyro data is in radians/sec, gyro.x = roll, gyro.y = pitch, gyro.z = yaw. So let’s convert these to degrees and store them:

float gyroPitch = ToDeg(gyro.y), gyroRoll = ToDeg(gyro.x), gyroYaw = ToDeg(gyro.z);

Next, we’re going to perform the ACRO stabilisation. We’re only going to do this if the throttle is above the minimum point (approx 100 pts above; mine is at 1170, where minimum is 1070), otherwise the propellers will spin when the throttle is zero and the quad is not level.

if(rcthr > 1170) {   // *** MINIMUM THROTTLE TO DO CORRECTIONS MAKE THIS 20pts ABOVE YOUR MIN THR STICK ***/
	long pitch_output =   pids[PID_PITCH_RATE].get_pid(gyroPitch - rcpit, 1);
	long roll_output =   pids[PID_ROLL_RATE].get_pid(gyroRoll - rcroll, 1);
	long yaw_output =   pids[PID_YAW_RATE].get_pid(gyroYaw - rcyaw, 1);

	hal.rcout->write(MOTOR_FL, rcthr - roll_output - pitch_output);
	hal.rcout->write(MOTOR_BL, rcthr - roll_output + pitch_output);
	hal.rcout->write(MOTOR_FR, rcthr + roll_output - pitch_output);
	hal.rcout->write(MOTOR_BR, rcthr + roll_output + pitch_output);
} else {  // MOTORS OFF
	hal.rcout->write(MOTOR_FL, 1000);
	hal.rcout->write(MOTOR_BL, 1000);
	hal.rcout->write(MOTOR_FR, 1000);
	hal.rcout->write(MOTOR_BR, 1000);

	for(int i=0; i<6; i++) // reset PID integrals whilst on the ground
		pids[i].reset_I();
}

Now raise the throttle about 20% and rotate your quad forward/back and left/right in your hands, making sure the correct propellers speed up and slow down; if the quad is tilted forward, then the front propellers should speed up and the rears slow down. If not, change the signs around on the motor outputs (e.g. if the pitch is wrong, swap the signs before the pitch; likewise with the roll).

You can test this fully if you choose, and tune your rate PIDs by fixing the quad on one axis with a piece of string and testing each of the axes in turn. It’s a useful experience to get a better understanding of how the rate PID is working but not strictly necessary. Here’s an example of mine with rate only PIDs – I command it to rotate at 50deg/second:

Video: Rate PIDs only with quad fixed on one axis

Now we need to add yaw support. As you know, two motors spin in one direction and two in the other to give us yaw control. So we need to speed up / slow down the two pairs relative to each other to keep our yaw constant.

hal.rcout->write(MOTOR_FL, rcthr - roll_output - pitch_output - yaw_output);
hal.rcout->write(MOTOR_BL, rcthr - roll_output + pitch_output + yaw_output);
hal.rcout->write(MOTOR_FR, rcthr + roll_output - pitch_output + yaw_output);
hal.rcout->write(MOTOR_BR, rcthr + roll_output + pitch_output - yaw_output);

This is a bit more difficult to test. You need to raise the throttle so that it hovers a little. If the yaw signs are wrong then the quad will spin.

You should now be able to get your quad off the ground for a few seconds. If you’re comfortable flying acro mode, you will even be able to fly it; although bear in mind that this is pure acro mode, not the acro on ArduCopter, where it performs auto-levelling for you.

If your quad flies floppy, or oscillates, then you need to adjust your rate Kp: up if floppy, down if oscillating. If it’s just going nuts, then you have the signs around the wrong way; try printing out PID outputs and motor commands to debug whilst moving the quad around (without the battery connected).

Back to top

Stabilised Control

Stabilised mode works similarly to rate mode, except our code sits on top of the rate code as follows:

Fig 7: Cascaded PID structure

Now, the pilot’s sticks dictate the angle that the quad should hold, not the rotational rate. So we can say, if the pilot’s sticks are centred and the quad is currently pitched at 20 degrees, then:

error = desiredAngle - actualAngle = 0 - 20 = -20

Now, in this case, we’re going to multiply error by a Kp such that the output is the angular rate to achieve. You’ll notice from earlier that Kp for the stab controllers is set at 4.5. So, if we have an error of -20, then the output from the PID is -20*4.5 = -90 (the negative just indicates direction). This means the quad should try to achieve a rate of -90 degrees per second to return it to level; we then just feed this into the rate controllers from earlier. As the quad starts to level, the error will decrease, the outputted target rate will decrease, and so the quadcopter will initially return to level quickly and then slow down as it reaches level. This is what we want!

// our new stab pids
float pitch_stab_output = constrain(pids[PID_PITCH_STAB].get_pid((float)rcpit - pitch, 1), -250, 250);
float roll_stab_output = constrain(pids[PID_ROLL_STAB].get_pid((float)rcroll - roll, 1), -250, 250);
float yaw_stab_output = constrain(pids[PID_YAW_STAB].get_pid((float)rcyaw - yaw, 1), -360, 360);

// rate pids from earlier
long pitch_output =  (long) constrain(pids[PID_PITCH_RATE].get_pid(pitch_stab_output - gyroPitch, 1), -500, 500);
long roll_output =  (long) constrain(pids[PID_ROLL_RATE].get_pid(roll_stab_output - gyroRoll, 1), -500, 500);
long yaw_output =  (long) constrain(pids[PID_YAW_RATE].get_pid(yaw_stab_output - gyroYaw, 1), -500, 500);

Now your quad should be able to hover, although it might be wobbly or oscillating. So if it’s not flying too great, now is the time to tune those PIDs, concentrating mainly on the rate ones (Kp in particular); the stab ones _should_ be okay. Also turn on the rate I-terms, setting them to ~1.0 for pitch/roll and nothing for yaw.

Notice that yaw isn’t behaving as you might expect: the yaw is locked to your yaw stick, so when your yaw stick goes left 45 degrees the quad rotates 45 degrees, and when you return your stick to centre, the quad returns its yaw. This is how we’ve coded it at present. We could remove the yaw stabilise controller and just let the yaw stick control yaw rate, but whilst that would work, the yaw may drift and won’t return to normal if a gust of wind catches the quad. So, when the pilot uses the yaw stick we feed it directly into the rate controller; when he lets go, we use the stab controller to lock the yaw where he left it.

As the yaw value goes from -180 to +180, we need a macro that will perform a wrap-around when the yaw goes beyond -180 or +180. So define this near the top of your code:

#define wrap_180(x) (x < -180 ? x+360 : (x > 180 ? x - 360: x))

If you examine it carefully: if x is < -180 it adds 360, if it’s > 180 it subtracts 360, otherwise it leaves x alone.

Define this global or static variable:

float yaw_target = 0;

Now in the main loop, we need to feed the yaw stick to the rate controller if the pilot is using it, otherwise we use the stab controller to lock the yaw.

float yaw_stab_output = constrain(pids[PID_YAW_STAB].get_pid(wrap_180(yaw_target - yaw), 1), -360, 360);

if(abs(rcyaw) > 5) {  // if pilot commanding yaw
	yaw_stab_output = rcyaw;  // feed to rate controller (overwriting stab controller output)
	yaw_target = yaw;         // update yaw target
}

You’ll also want to set your yaw target to be the direction the quad is facing when it’s on the ground with the throttle off; you can do this in the else part of the if.

That’s it; now yaw should behave normally. Although, if you pay attention, you might notice that the yaw drifts slowly over several tens of seconds. This may not bother you. The reason it happens is that although your yaw stick is centred, radio jitter means the quad doesn’t always receive 0; it hovers around that value, causing the yaw to change. Additionally, the MPU6050’s yaw reading drifts over time (1-2 deg/sec); you’d need to use the compass to compensate for this drift (if you really care enough to fix it; most people don’t notice).

Back to top

Final Product – video and full code

Congratulations, you’ve built your first flight controller for a multi-copter! You’ll notice it’s a lot more aggressive than the standard ArduCopter code; this is because ArduCopter does a lot of processing on the pilot’s inputs to make it easier to fly. Raise your throttle to ~80% and your quad will _rocket_ into the sky far faster than you could achieve on ArduCopter. Be warned: don’t raise your throttle too close to 100%, as that won’t leave any room for the controller to change the motor speeds to keep it level and it’ll flip (you can implement an automatic throttle limiter fairly easily).

Download the Completed Code; this should run straight away if your regular ArduCopter flies (after you’ve adjusted the radio max/mins), though you might also need to adjust the PIDs to get it stable.

Video of the final product in action.

Back to top

Other ideas: Safety

  • add a mechanism to arm/disarm the quadcopter.
  • Ensure you’ve thought about what happens when there are bugs in your code – you don’t want the throttle getting stuck on full! Investigate the watchdog timer.

Back to top

Optional: Raspberry Pi

Your best bet here is to use the ArduPilot hardware as a sensor/control expansion board by connecting it to the Pi over USB. You need to be very careful, because the Pi runs Linux, and as a result it is very difficult to do fine-grained timing like controlling ESCs or reading radios. I learnt a hard lesson after choosing to do the low-level control loop (PIDs) on the Pi: trying to be clever, I decided to put a log write in the middle of the loop for debugging. The quad initially flew fine, but then Linux decided to take 2 seconds to write one log entry and the quad almost crashed into my car! Therefore, your best bet is to offload the time-critical stuff to the ArduPilot hardware and run high-level control on the Pi (e.g. navigation). You’re then free to use a language like Python, because millisecond precision isn’t needed. The example I give here is exactly that scenario.

Fig 8: Raspberry Pi Quad Diagram

Connect the ArduPilot to your Raspberry Pi over USB and modify the code in this article to accept THR, YAW, PIT, ROL over the serial port (sample provided below). You can then set your Raspberry Pi up as a wifi access point and send your stick inputs over wireless from your phone (beware that wifi has very short range, ~30m).

Sample code

Android App: Download app – sends thr, yaw, pitch, roll from the pilot out on UDP port 7000 to 192.168.0.254 (you can change this in the app).

Raspberry Pi: Download server – on the Pi, we run a Python script that listens for the control packets from the Android app and then sends them to the ArduPilot. Here I’m just implementing a simple relay, but you could easily do something more complex like navigation, control over 3G, etc.

ArduPilot: Download code – accepts thr, yaw, pitch and roll over the serial port rather than over the RC radio. A simple checksum is used to discard bad packets.

Video:

Back to top

Optional: Autonomous Flight

Autonomous flight should now be fairly straightforward to implement. Some tips:

GPS Navigation: ArduPilot provides libraries for parsing GPS data into latitude and longitude; you’ll just need a PID to convert desired speed into pitch/roll, and another PID to convert distance-to-waypoint into desired speed. You can use the compass to work out the direction to your waypoint, and then just translate that into the right amount of pitch and yaw.

Altitude Hold: You can sense altitude with the barometer that is built onto the ArduPilot board. You’ll need two PIDs: one to calculate throttle alterations from the desired ascent/descent rate, and a second to calculate the desired ascent/descent rate from the distance to the desired altitude.

Back to top

Some images are borrowed from the ArduPilot project and are covered by its licence; please see their website for details. The code on this page, along with the ArduPilot libraries, is available as is, without warranty, under the GNU General Public Licence.

GETTING STARTED WITH METEOR.JS

Below is a guest post from Ben Strahan, a meteor.js club member. He put together a great post and I really wanted to share it with the whole Meteor.js Club!

How do I become a web app developer – Meteor style

What does an aspiring web developer need to know to develop a Meteor app? Below is a list of languages, frameworks, libraries, packages & more ;) .

The lists that follow are purposely ordered, unless noted. This article does not explain why you need to learn each item (that is up to you to figure out). Instead this article’s purpose is to provide a quick roadmap or “thousand mile” view of the technologies a Meteor Dev works with daily.

When you are in the weeds of learning new things it feels good knowing you have a map to reference and measure your progress against.


Languages, Libraries & Frameworks, oh my!

Ultimately you need to be able to understand Meteor’s API. Getting a grasp of the technologies listed below will give you what you need. There is no need to become an expert yet but you need to understand the structure and terminology of each.

Don’t know what an API is? Check out this dude’s video

Required

  1. Javascript – JS first?! Yes soldier, don’t question me again or I will karate chop you!
  2. Shell (Terminal)
  3. HTML & CSS
  4. JSON
  5. MongoDB
  6. Handlebars
  7. Git & GitHub
  8. jQuery
  9. LESS, SASS, and/or Stylus
  10. Underscore and/or Lo-Dash
  11. Bootstrap

Optional (learn when needed)

  1. NodeJS
  2. Cordova
  3. ElasticSearch
  4. Ionic – Meteor Package Meteoric

MeteorJS

Now that you know the above, you are deemed worthy to tap into the power and awesomeness of Meteor!


Why did you need to learn ALL that stuff above before touching Meteor? Because Meteor is considered a Full-Stack platform. Through Meteor you manage the front-end, back-end and all the other ends.

…Okay, no more questions, let’s learn MORE!

Time to become a Meteor nerd, review the docs.

If the sub-projects look intimidating don’t worry. At a minimum below are the key packages in the sub-projects you need to know.

  1. Blaze
  2. Spacebars
  3. Tracker
  4. Utilities

Good Meteor Tutorials & Courses

Ordered by difficulty & depth. These tutorials, courses, books & videos will walk you through various Meteor projects, where everything you learned above comes together.

  1. Meteor’s official tutorial (FREE)
  2. Your First Meteor Application by David Turnbull (FREE)
  3. Meteor Walkthrough Videos by George McKnight (FREE)
  4. Meteor Cookbook by Abigail Watson
  5. Discover Meteor by Sacha Greif & Tom Coleman ($ to $$)
  6. Meteor in Action by Manuel Schoebel & Stephan Hochhaus ($)
  7. 8 Days of Meteor by Josh Owens ($)
  8. Meteor Testing by Sam Hatoum ($)
  9. Meteor Club Master Bootcamp by Josh Owens ($$$)
  10. Meteor Club Testing Bootcamp by Josh Owens & Sam Hatoum ($$$)
  11. Bulletproof Meteor by Arunoda Susiripala (FREE to $$)
  12. Advance courses at Evented Mind by Chris Mather ($$)

Meteor Packages (no order)

Yes, there is even more to learn. Meteor has a package manager called Atmosphere, which allows the community to build packages that deeply integrate into the Meteor platform and expand the APIs available to you, the developer. Below is a list of the standard packages you will find in almost every serious Meteor app, so you should get to know them.

Package Name GitHub Atmosphere Website
accounts-password github atmosphere website
useraccounts:core github atmosphere website
reactive-var atmosphere website
reactive-dict atmosphere
iron:router github atmosphere guide website
zimme:iron-router-active github atmosphere
zimme:iron-router-auth github atmosphere
manuelschoebel:ms-seo github atmosphere article
dburles:collection-helpers github atmosphere
matb33:collection-hooks github atmosphere
reywood:publish-composite github atmosphere website
ongoworks:security github atmosphere
alanning:roles github atmosphere website
aldeed:autoform github atmosphere
aldeed:collection2 github atmosphere
aldeed:simple-schema github atmosphere
momentjs:moment github atmosphere website
matteodem:easy-search github atmosphere website
matteodem:server-session github atmosphere
meteorhacks:kadira github atmosphere website
meteorhacks:aggregate github atmosphere
meteorhacks:fast-render github atmosphere website
meteorhacks:subs-manager github atmosphere
meteorhacks:unblock github atmosphere
raix:handlebar-helpers github atmosphere
yogiben:helpers github atmosphere
zimme:collection-softremovable github atmosphere
zimme:collection-timestampable github atmosphere
u2622:persistent-session github atmosphere
tmeasday:publish-counts github atmosphere
percolatestudio:synced-cron github atmosphere
dburles:factory github atmosphere
anti:fake github atmosphere

The rabbit hole goes deeper…

Wow, you must really be committed if you got this far. Ok, so you want my super secret lists?

Service Providers

When you go to deploy your app online, there is a huge number of service providers available to a developer. Below are a few that specifically serve the Meteor community (and do a great job), so I decided to give them a shout.

  • Kadira – Performance Tracking
  • Modulus – Hosting (Use code ‘Metpodcast’ to get a $25 credit)
  • Compose – Mongo Database Hosting with Oplog

Blogs, Vlogs, News & more (no order)

Come drink the Meteor Kool-Aid with me… look, we won’t be alone.

If I forgot someone let me (@_benstr) or @joshowens know

Other articles like this one


Josh Owens

It all started with an Atari 800XL, but now Josh is a ruby and javascript developer with 10 years of professional experience. His current love is Meteor.js, which he works with daily.
Cincinnati, Ohio

Creating a Mobile Application with Reapp


Creating a Mobile Application with Reapp

Jay Raj
React is a JavaScript library focused on building user interfaces. Its increase in popularity has been helped in part by the fact that it’s created, used, and maintained by Facebook.

Why React ?

React works on the concept of a “virtual DOM”, which makes it different from other JS libraries. When a change occurs, React updates the virtual DOM instead of the actual DOM. When there are several changes in the virtual DOM, it batches them into a single update, thus avoiding frequent updates to the actual DOM.

From the official site,

React abstracts away the DOM from you, giving a simpler programming model and better performance. React can also render on the server using Node, and it can power native apps using React Native.

Introducing Reapp.io

Reapp is a platform to create mobile apps. It provides a UI kit of components, optimized and fully customizable for creating mobile apps.

Reapp Demo

What we’ll create

In this tutorial, we’ll learn how to create a mobile app using Reapp. We’ll call it the “Where was I” app; it helps a user save different locations. We’ll make use of the Google Maps API to let users select locations, and Firebase as the back end to save the data.

Source code for this tutorial is available on GitHub.

Get Started

We’ll start by installing Reapp and creating a project called ReactApp.

npm install -g reapp
reapp new ReactApp

Open the project directory and run the app, which should then be available at http://localhost:3010.

cd ReactApp && reapp run

Here is the resulting project structure.

ReactApp Project Structure

Inside the project directory is the app folder, which contains the app.js file. The different routes for the application are defined in app.js. The components folder contains the components that are rendered when a particular route is requested.

Creating Views

Start by removing the sub.jsx file from the components/home folder. Open home.jsx and remove the existing code. Let’s start from scratch and try to understand how things work. We’ll create a React class called Home to render our component.

import { Reapp, React, View } from 'reapp-kit';

var Home = React.createClass({
  render: function() {
    return (
      <h2>Welcome to Reapp!!</h2>
    );
  }
});

export default Reapp(Home);

As seen above, the render function returns the view to be displayed. Update the routes in the app.js file.

import './theme';
import { router, route } from 'reapp-kit';

router(require,
  route('home', '/')
);

Save changes and restart the server. Open http://localhost:3010 in your browser and you should see the default view. I recommend enabling device emulation in Chrome Developer Tools to view the app as a mobile app.


Next we’ll integrate Google Maps into our view. Add a header for the app by modifying home.jsx to return a view inside the render function.

<View title="Where Am I">
</View>

Next, create a new map component to display Google Maps. Start by adding the Google Maps API reference to the assets/web/index.html page.
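
One way to do that is a plain script tag. Note this is a sketch: at the time this style of keyless loading worked, but current Google Maps usage requires an API key parameter.

```html
<!-- assets/web/index.html -->
<script src="https://maps.googleapis.com/maps/api/js"></script>
```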

In home.jsx create a new React component which will display the Google Map.

var MapView = React.createClass({
    render: function() {
        return (
            <div id="map"><span>Map Would be Here !!</span></div>
        );
    }
});

Add the MapView component to the home view.

<View title="Where Am I">
    <MapView />
</View>

Add the following style to assets/web/index.html.

<style>
    #map {
        width: 100%;
        height: 400px;
        margin: 0px;
        padding: 0px;
    }
</style>

Save changes and restart the server. You should see the text Map Would be Here !! on your app screen.

Adding Google Maps

We have seen how to nest React components. Next we’ll remove the span inside the MapView render function and replace it with the actual map. Once the component has mounted, we’ll create the Google Map and render it in the #map div.

We’ll write our Google Maps code in the componentDidMount lifecycle method, since the DOM node must exist before a map can be rendered into it. Inside the MapView component, add the componentDidMount method.

componentDidMount: function() {
    // Code will be here
},

Inside componentDidMount define a default map location, map options and create the map.

var sitepoint = new google.maps.LatLng(-37.805723, 144.985360);
var mapOptions = {
        zoom: 3,
        center: sitepoint
    },
    map = new google.maps.Map(React.findDOMNode(this), mapOptions);

this.setState({
    map: map
});

In the code above, React.findDOMNode gets a reference to the component’s DOM node and setState triggers UI updates. Save changes and restart the server. If all is well, you should be able to view the map.


Let’s add a marker to our Google Map. We’ll set several options on the marker, such as animation and draggable.

marker = new google.maps.Marker({
    map: map,
    draggable: true,
    animation: google.maps.Animation.DROP,
    position: sitepoint
});

Here is the full MapView component:

var MapView = React.createClass({
    componentDidMount: function() {
        var sitepoint = new google.maps.LatLng(-37.805723, 144.985360);
        var mapOptions = {
                zoom: 3,
                center: sitepoint
            },
            map = new google.maps.Map(React.findDOMNode(this), mapOptions),
            marker = new google.maps.Marker({
                map: map,
                draggable: true,
                animation: google.maps.Animation.DROP,
                position: sitepoint
            });

        this.setState({
            map: map
        });
    },
    render: function() {
        return (
            <div id="map"><span>Map Would be Here !!</span></div>
        );
    }
});

Save changes, restart the server and you should have the map with a marker.


Adding Position Info

When the user drags the marker, the position info should update. To implement this, add the required HTML in the Home component. Modify the render function to look like this:

render: function() {
    return (
      <View title="Where Am I">
        <MapView />
        <div style={{width: 100 + '%', height: 100 + 'px', margin: 0 + ' auto', padding: 10 + 'px'}} id="infoPanel">
            <div>
              <span><b>Position:</b></span>
              <span id="info"></span>
            </div>
            &nbsp;
            <div>
              <span><b>Address:</b></span>
              <span id="address"></span>
            </div>
        </div>
      </View>
    );
}

Set the default position and address. Since the default latitude and longitude are hard-coded, set the info value as shown:

document.getElementById('info').innerHTML = '-37.805723, 144.985360';

To display the address we’ll make use of Google Maps Geocoder.

geocoder.geocode({
    latLng: marker.getPosition()
}, function(responses) {
    if (responses && responses.length > 0) {
        document.getElementById('address').innerHTML = responses[0].formatted_address;
    }
});

Here is the modified MapView component:

var MapView = React.createClass({
    componentDidMount: function() {
        var geocoder = new google.maps.Geocoder();
        var sitepoint = new google.maps.LatLng(-37.805723, 144.985360);
        document.getElementById('info').innerHTML = '-37.805723, 144.985360';

        var mapOptions = {
                zoom: 3,
                center: sitepoint
            },
            map = new google.maps.Map(React.findDOMNode(this), mapOptions),
            marker = new google.maps.Marker({
                map: map,
                draggable: true,
                animation: google.maps.Animation.DROP,
                position: sitepoint
            });

        geocoder.geocode({
            latLng: marker.getPosition()
        }, function(responses) {
            if (responses && responses.length > 0) {
                document.getElementById('address').innerHTML = responses[0].formatted_address;
            }
        });

        this.setState({
            map: map
        });
    },
    render: function() {
        return (
            <div id="map"><span>Map Would be Here !!</span></div>
        );
    }
});

Save changes, restart the server and you should have the default position and address displayed in the app.


Let’s add a dragend event listener to update the position and address once the marker is dragged. Inside the dragend callback, the marker position and address are fetched and the info and address elements updated with the new values.

google.maps.event.addListener(marker, 'dragend', function(e) {
    var obj = marker.getPosition();
    document.getElementById('info').innerHTML = e.latLng;
    map.panTo(marker.getPosition());

    geocoder.geocode({
        latLng: obj
    }, function(responses) {
        if (responses && responses.length > 0) {
            document.getElementById('address').innerHTML = responses[0].formatted_address;
        }
    });
});

Save changes and restart the server. Now if the marker is dragged, the info is updated when dragging ends.

Save Information in Firebase

Let’s add a button to save the coordinates to Firebase. First, install reapp-ui in the project.

npm install reapp-ui@0.12.47

Import the Button component into home.jsx.

import Button from 'reapp-ui/components/Button';

Add the button to the Home component.

<Button onTap={this.savePosition}>Save</Button>

On tapping the Save button, a function will save the coordinates to Firebase. Register for a free account on Firebase to use it in this app. Once registered, you’ll have a Firebase URL to start working with. Here is my Firebase URL:

https://blistering-heat-2473.firebaseio.com

Log in to your Firebase account and click the plus icon next to the Firebase URL displayed in your dashboard to create a child URL such as:

https://blistering-heat-2473.firebaseio.com/Position

Use the above URL to save the location information.

Include a reference to Firebase in the assets/web/index.html page.
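
For example, via the legacy Firebase CDN build that matches the new Firebase(url) / push API used below. The version number here is an assumption; adjust to whatever the CDN serves:

```html
<!-- assets/web/index.html -->
<script src="https://cdn.firebase.com/js/client/2.2.1/firebase.js"></script>
```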

Next, define the savePosition function in the Home component, which will be called on tapping the Save button.

savePosition: function() {
    var wishRef = new Firebase('https://blistering-heat-2473.firebaseio.com/Position');
    var pos = document.getElementById('info').innerHTML;
    var address = document.getElementById('address').innerHTML;

    wishRef.push({
        'Position': pos,
        'Address': address
    });
},

As seen above, we created a Firebase reference using the Firebase URL and pushed the data to Firebase using the push API function.

Save changes and restart the server. Locate a position on the map and click Save. Check Firebase and the data should be saved.

Let’s add an alert to notify the user that the data has been saved. We’ll make use of the Modal component, so import it into home.jsx.

import Modal from 'reapp-ui/components/Modal';

Inside the Home component’s render function, add the following modal code:

{this.state.modal &&
  <Modal
    title="Coordinates Saved."
    onClose={() => this.setState({ modal: false })}>
  </Modal>
}

This will be visible when state.modal is true. Initialize state.modal to false when the app loads; for that, make use of the getInitialState method. Inside the Home component, define getInitialState:

getInitialState: function() {
    return {
      modal: false
    };
}

Inside the savePosition method, after pushing the data to Firebase, set state.modal to true to show the modal.

this.setState({ modal: true });

Save changes and restart the server. Once the app has loaded, click on the Save button to save the data and you should be able to see the modal pop up.


Conclusion

In this tutorial, we saw how to get started creating a mobile app using ReactJS, Reapp and Firebase. We created an app that saves map coordinates selected in Google Maps to Firebase.

I hope this tutorial serves as a starting point for creating mobile apps using ReactJS. Let me know your thoughts on React and Reapp, and how you think they compare to existing JavaScript frameworks.


Refactoring React Components to ES6 Classes


Here at NMC, we’re big fans of the React library for building user interfaces in JavaScript. We’ve also been experimenting with the next version of JavaScript, ES6, and were excited to see the latest version of React promote ES6 functionality. Starting with React 0.13, defining components using ES6 classes is encouraged.

Refactoring a React 0.12 component defined using `createClass` to a 0.13-and-beyond class only requires a few straightforward refactoring steps. In this blog post, we’ll walk through them one by one.

Step 1 – Extract `propTypes` and `getDefaultProps` to properties on the component constructor

Unlike object literals, which the `createClass` API expected, class definitions in ES6 only allow you to define methods and not properties. The committee’s rationale for this was primarily to have a minimal starting point for classes which could be easily agreed upon and expanded in ES7. So for class properties, like `propTypes`, we must define them outside of the class definition.

Another change in React’s 0.13 release is that `props` are required to be immutable. This being the case, `getDefaultProps` no longer makes sense as a function and should be refactored out to a property on the constructor, as well.

Before:

var ExampleComponent = React.createClass({
 propTypes: {
  aStringProp: React.PropTypes.string
 },
 getDefaultProps: function() {
  return { aStringProp: '' };
 }
});

After:

var ExampleComponent = React.createClass({ ... });
ExampleComponent.propTypes = {
 aStringProp: React.PropTypes.string
};
ExampleComponent.defaultProps = {
 aStringProp: ''
};

Step 2 – Convert component from using `createClass` to being an ES6 Class

ES6 class bodies are more terse than traditional object literals. Methods do not require a `function` keyword and no commas are needed to separate them. This refactoring looks as such:

Before:

var ExampleComponent = React.createClass({
 render: function() {
  return <div onClick={this._handleClick}>Hello, world.</div>;
 },
 _handleClick: function() {
  console.log(this);
 }
});

After:

class ExampleComponent extends React.Component {
 render() {
  return <div onClick={this._handleClick}>Hello, world.</div>;
 }
 _handleClick() {
  console.log(this);
 }
}

Step 3 – Bind instance methods / callbacks to the instance

One of the niceties provided by React’s `createClass` functionality was that it automatically bound your methods to a component instance. For example, this meant that within a click callback `this` would be bound to the component. With the move to ES6 classes, we must handle this binding ourselves. The React team recommends prebinding in the constructor. This is a stopgap until ES7 allows property initializers.

Before:

class ExampleComponent extends React.Component {
 render() {
  return <div onClick={this._handleClick}>Hello, world.</div>;
 }
 _handleClick() {
  console.log(this); // this is undefined
 }
}

After:

class ExampleComponent extends React.Component {
 constructor() {
  super();
  this._handleClick = this._handleClick.bind(this);
 }
 render() {
  return <div onClick={this._handleClick}>Hello, world.</div>;
 }
 _handleClick() {
  console.log(this); // this is an ExampleComponent
 }
}

As a bonus step, at the end of this post we’ll look at introducing our own Component superclass that tidies up this autobinding.

Step 4 – Move state initialization into the constructor

The React team decided a more idiomatic way of initializing state was simply to store it in an instance variable set up in the constructor. This means you can refactor away your `getInitialState` method by moving its return value into an assignment to `this.state` in your class’ constructor.

Before:

class ExampleComponent extends React.Component {
 getInitialState() {
  return Store.getState();
 }
 constructor() {
  super();
  this._handleClick = this._handleClick.bind(this);
 }
 // ...
}

After:

class ExampleComponent extends React.Component {
 constructor() {
  super();
  this._handleClick = this._handleClick.bind(this);
  this.state = Store.getState();
 }
 // ...
}

Conclusion

The handful of refactoring steps needed to convert an existing component to an ES6 class / React 0.13 and beyond component is pretty straightforward. While `React.createClass` is not deprecated, and will not be until JavaScript has a story for mixins, there is a strong consensus that working in the direction the language is heading is wise.

As a closing thought, consider one additional refactoring that introduces your project’s own base Component class to hold niceties that are reused through your own Component library.

Bonus Step – Refactor to a base component

Before:

class ExampleComponent extends React.Component {
 constructor() {
  super();
  this._handleClick = this._handleClick.bind(this);
  this._handleFoo = this._handleFoo.bind(this);
 }
 // ...
}

After:

class BaseComponent extends React.Component {
 _bind(...methods) {
  methods.forEach( (method) => this[method] = this[method].bind(this) );
 }
}

class ExampleComponent extends BaseComponent {
 constructor() {
  super();
  this._bind('_handleClick', '_handleFoo');
 }
 // ...
}

Notice how we’ve reduced the tedium of binding multiple instance methods to `this` by writing a `_bind` helper method in our `BaseComponent`. The `_bind` method uses a couple of awesome ES6 features: `methods` is a rest parameter, and there’s an arrow function in the `forEach`. If you’re unfamiliar with these features of ES6, I’ll leave them as cliffhangers for you to explore further. Happy trails.
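
As a tiny standalone taste of those two features before you explore (names here are illustrative only):

```javascript
// Rest parameter: `...nums` collects all arguments into a real array.
function sum(...nums) {
  // Arrow function: a concise callback that also keeps the lexical `this`.
  return nums.reduce((total, n) => total + n, 0);
}

console.log(sum(1, 2, 3)); // 6
console.log(sum());        // 0
```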


11 Recommended Open-Source React Native UI Components


This article recommends 11 excellent open-source React Native components that we hope will be helpful to mobile app developers.

React Native is a native mobile app development framework recently open-sourced by Facebook under the MIT license, and it is already used in Facebook’s production environment. React Native lets you use the hugely popular React.js library to build native iOS and Android apps.

1. iOS form-handling component: tcomb-form-native

tcomb-form-native is a powerful form-processing library for React Native, with JSON-schema support and a pluggable look and feel. Online demo: http://react.rocks/example/tcomb-form-native

2. Camera view: react-native-camera

react-native-camera is a camera viewport for React Native. The module is at an early stage of development; it supports switching cameras and basic photo capture.

Usage example:

var React = require('react-native');
var {
  AppRegistry,
  StyleSheet,
  Text,
  TouchableHighlight,
  View,
} = React;
var Camera = require('react-native-camera');

var cameraApp = React.createClass({
  render: function() {
    return (
      <View>
        <TouchableHighlight onPress={this._switchCamera}>
          <View>
            <Camera
              ref="cam"
              aspect="Stretch"
              orientation="PortraitUpsideDown"
              style={{height: 200, width: 200}}
            />
          </View>
        </TouchableHighlight>
      </View>
    );
  },
  _switchCamera: function() {
    this.refs.cam.switch();
  }
});

AppRegistry.registerComponent('cameraApp', () => cameraApp);

3. react-native-video

react-native-video provides a <Video> tag component for React Native.

Example:

// Within your render function, assuming you have a file called
// "background.mp4" in your project
<Video source={"background"} style={styles.backgroundVideo} repeat={true} />

// Later on in your styles...
var styles = StyleSheet.create({
  backgroundVideo: {
    resizeMode: 'cover', // stretch and contain also supported
    position: 'absolute',
    top: 0,
    left: 0,
    bottom: 0,
    right: 0,
  },
});

4. Navigation bar: react-native-navbar

react-native-navbar is a simple, customizable navigation bar for React Native.

Sample code:

var NavigationBar = require('react-native-navbar');

var ExampleProject = React.createClass({
  renderScene: function(route, navigator) {
    var Component = route.component;
    var navBar = route.navigationBar;

    if (navBar) {
      navBar = React.addons.cloneWithProps(navBar, {
        navigator: navigator,
        route: route
      });
    }

    return (
      <View style={styles.navigator}>
        {navBar}
        <Component navigator={navigator} route={route} />
      </View>
    );
  },

  render: function() {
    return (
      <Navigator
        style={styles.navigator}
        renderScene={this.renderScene}
        initialRoute={{
          component: InitialView,
          navigationBar: <NavigationBar title="Initial View" />
        }}
      />
    );
  }
});

5. Carousel component: react-native-carousel

react-native-carousel is a simple carousel component for React Native.

Sample code:

var Carousel = require('react-native-carousel');

var ExampleProject = React.createClass({
  render() {
    return (
      <Carousel width={375} indicatorColor="#ffffff" inactiveIndicatorColor="#999999">
        <MyFirstPage />
        <MySecondPage />
        <MyThirdPage />
      </Carousel>
    );
  }
});

6. Pull-to-refresh component: react-native-refreshable-listview

react-native-refreshable-listview is a pull-to-refresh ListView that shows a loading prompt while data is reloading.

React Native Hacker News

7. Modal component: react-native-modal

react-native-modal is a <Modal> component for React Native.

8. Styled-text component: react-native-htmltext

react-native-htmltext lets you use HTML-like markup to create styled text in React Native. React Native provides a Text element for styled text in place of NSAttributedString, and you can nest text elements:

<Text style={{fontWeight: 'bold'}}>
  I am bold
  <Text style={{color: 'red'}}> and red </Text>
</Text>

9. react-native-htmlview

react-native-htmlview is a component that renders HTML content as native views, with customizable styles.

10. LinearGradient component: react-native-linear-gradient

react-native-linear-gradient is a LinearGradient component for React Native.

11. Looped carousel: react-native-looped-carousel

react-native-looped-carousel is a looping (wrap-around) carousel component based on React Native.

Sample code:

'use strict';

var React = require('react-native');
var Carousel = require('react-native-looped-carousel');
var Dimensions = require('Dimensions');

var {width, height} = Dimensions.get('window');
var {
  AppRegistry,
  StyleSheet,
  Text,
  View
} = React;

var carouselTest = React.createClass({
  render: function() {
    return (
      <Carousel delay={500}>
        <View style={{backgroundColor: '#BADA55', width: width, height: height}} />
        <View style={{backgroundColor: 'red', width: width, height: height}} />
        <View style={{backgroundColor: 'blue', width: width, height: height}} />
      </Carousel>
    );
  }
});

AppRegistry.registerComponent('carouselTest', () => carouselTest);

If you know of other React Native plugins, share them with everyone in the comments!

