
Deploying a React+Redux Web App


March 09, 2016

A while ago I built a back-office admin project with React+Redux and shared my getting-started experience in React初体验. This article talks about my deployment practices.

Goals

What makes a good deployment? I think there are at least two requirements:

  • Performance optimization: including code execution speed and page load time
  • Automation: let machines do the repetitive work as much as possible; ideally a single command completes the whole deployment

Code level

Let's start the analysis at the code level.

With React+Redux you will often use the powerful debugging tool Redux DevTools. Configuring DevTools by hand requires some setup around the Store and Components, but all of that exists only to ease debugging and we don't want it in production. So the advice is to separate the development and production environments at the code level:

containers/
    Root.js
    Root.dev.js
    Root.prod.js
    ...
store/
    index.js
    store.dev.js
    store.prod.js

At the same time, use a separate entry file (such as containers/Root.js above) to load the code for the appropriate environment on demand:

if (process.env.NODE_ENV === 'production') {
    module.exports = require('./Root.prod');
} else {
    module.exports = require('./Root.dev');
}

One detail worth noting: ES6 syntax does not allow import statements inside an if block, so the CommonJS require is used here instead.

For details, take a look at Redux's Real World example project.

Another thing to watch at the code level is importing only what you need; otherwise unnecessary code may end up in the bundle.
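For instance (a minimal illustration using lodash purely as an example; any library that publishes per-method modules works the same way):

// pulls in the whole library, much of which may never be used
import _ from 'lodash';

// pulls in only the single function that is actually needed
import map from 'lodash/map';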

OK, now let's build a bundle with webpack: webpack --config webpack.config.prod.js --progress. The result may give you quite a shock: 8.4 MB! Ouch…

Bundling with webpack

Next, let's tune the bundler. There are two mainstream bundlers in the React world at the moment: webpack and Browserify. I have never used Browserify, so here I will mainly share my webpack configuration experience.

As above, I suggest preparing separate webpack configuration files for different environments, e.g. webpack.config.dev.js and webpack.config.prod.js. Let's look at a few key configuration options:

devtool

The docs are here. I don't know source map technology very well, so honestly I have no idea what several of the options do. Fortunately the table in the docs marks which ones are production supported; just pick one of those, the results don't seem to differ much. I chose source-map, and after running webpack it produced two files:

  • bundle.js:3.32 MB
  • bundle.js.map:3.78 MB

Hmm, much better: the source map used for locating the original source has been split out, cutting the size by more than half in one go. (Note: the source map is only loaded when the browser devtools are activated, so it does not affect normal page load speed; see "When is jQuery source map loaded?" and "JavaScript Source Map 详解" for details.)

plugins

The webpack docs have an Optimization section that covers some optimization tricks. Chunks is a bit advanced and I haven't used it, so let's focus on the first two parts. They mention three plugins: UglifyJsPlugin, OccurenceOrderPlugin and DedupePlugin. Everyone probably knows what the first one does; the other two sound rather arcane, but it doesn't matter if you don't fully understand them, just turn them all on, there are no side effects anyway:

plugins: [
    new webpack.optimize.UglifyJsPlugin({
        compress: {
            warnings: false
        }
    }),
    new webpack.optimize.DedupePlugin(),
    new webpack.optimize.OccurenceOrderPlugin()
]

Bundle size: 1.04 MB.

Don't ignore NODE_ENV

NODE_ENV is just an environment variable, accessible in Node via process.env.NODE_ENV. By convention it is used to indicate whether the current environment is development or production.

React ships two versions of its code (see: Development vs. Production Builds), and the production build performs better:

We provide two versions of React: an uncompressed version for development and a minified version for production. The development version includes extra warnings about common mistakes, whereas the production version includes extra performance optimizations and strips all error messages.

The React docs also explicitly recommend setting NODE_ENV to production in production environments (see: npm):

Note: by default, React will be in development mode. To use React in production mode, set the environment variable NODE_ENV to production (using envify or webpack’s DefinePlugin). A minifier that performs dead-code elimination such as UglifyJS is recommended to completely remove the extra code present in development mode.

The environment variable can be set via webpack's DefinePlugin, like this:

plugins: [
    ...
    new webpack.DefinePlugin({
        'process.env.NODE_ENV': JSON.stringify('production')
    }),
]

Bundle size: 844 KB.

Not a huge reduction compared with the previous 1 MB, but it also improves React's runtime performance, so it is well worth it.

OK, that's it for webpack. Here is the complete webpack.config.prod.js:

var path = require('path');
var webpack = require('webpack');

module.exports = {
    devtool: 'source-map',
    entry: [
        './index.js'
    ],
    output: {
        path: path.join(__dirname, 'webpack-output'),
        filename: 'bundle.js',
        publicPath: '/webpack-output/'
    },
    plugins: [
        new webpack.optimize.UglifyJsPlugin({
            compress: {
                warnings: false
            }
        }),
        new webpack.optimize.DedupePlugin(),
        new webpack.optimize.OccurenceOrderPlugin(),
        new webpack.DefinePlugin({
            'process.env.NODE_ENV': JSON.stringify('production')
        }),
    ],
    module: {
        loaders: [
            {
                test: /\.js$/,
                loader: 'babel',
                exclude: /node_modules/,
                include: __dirname
            },
            {
                test: /\.css$/,
                loaders: ["style", "css"]
            },
            {
                test: /\.scss$/,
                loaders: ["style", "css", "sass"]
            }
        ]
    },
};

The bundle is emitted into the webpack-output folder.

Adding hashes with FIS3

A widely accepted front-end best practice is to stamp assets with a hash, which is very useful for caching static resources. The webpack docs have a Long-term Caching section dedicated to this, but the configuration looks rather cumbersome, so in the end I went with Baidu's FIS3.

See its documentation for usage; it is very detailed. Here is my fis-conf.js:

// files to package
fis.set('project.files', ['index.html', 'static/**', 'webpack-output/**']);

// minify CSS
fis.match('*.css', {
    optimizer: fis.plugin('clean-css')
});

// compress PNG images
fis.match('*.png', {
    optimizer: fis.plugin('png-compressor')
});

fis.match('*.{js,css,png}', {
    useHash: true,  // enable hashing
    domain: 'http://7xrdyx.com1.z0.glb.clouddn.com',    // add CDN prefix
});

Here, useHash: true turns on hashing, CSS and PNG images are compressed, and domain adds the CDN prefix.

After running fis3 release -d ./output, all files are packaged into the output folder. A screenshot:

Using a CDN

844 KB is a tenth of the original 8.4 MB, but it is still a bit large. The bundle size has basically been squeezed to its limit, yet we can still shorten page load time with a CDN.

I chose Qiniu; it works well and the free quota is sufficient.

In the previous step we already used FIS3 to add the Qiniu CDN prefix; the next step is uploading the packaged files. Uploading by hand is too tedious, so Qiniu provides a command-line tool for batch uploads, qrsync; see its documentation for usage.

Remote deployment with Fabric

Deployment inevitably involves logging in to the server and running deployment commands. You can do this by hand, but I recommend using a tool so it can be automated. There are plenty of such tools; pick whichever feels comfortable. Since I have prior Python experience I have always used Fabric, and it works great. Install Python, then the package manager pip, then sudo pip install fabric and you're done.

Create a fabfile.py in the project root and describe the remote deployment process in Python:

# coding: utf-8
from fabric.api import run, env, cd

def deploy():
    env.host_string = "username@ip"
    with cd('/path/to/your/project'):
        run('git pull')
        run('npm install')
        run('webpack --progress --config webpack.config.prod.js')
        run('fis3 release -d ./output')
        run('qrsync qrsync.conf.json')

Here, env.host_string describes the server; we then cd into the project folder, git pull fetches the source from the Git repository, npm install installs third-party packages, the various packaging steps follow, and finally everything is batch-uploaded to the CDN.

Run fab deploy locally and the app is deployed to the production server.

Nginx

The finishing touches are handled by Nginx:

  • Map the domain name to the local folder path
  • gzip support: definitely do this, the effect is great; to enable it, just uncomment the gzip-related lines in /etc/nginx/nginx.conf
  • Route every non-existent path to /index.html: otherwise refreshing the browser on a non-root path returns a 404, a pitfall every React developer knows…

My nginx.conf looks like this:

server {
    listen 80;
    server_name yourdomain.com;
    root /path/to/your/project;

    location / {
        try_files $uri /index.html;
    }
}

Note: some readers may wonder why there is no cache configuration; that's because everything has been uploaded to the CDN…

Actual loading in the browser

Viewed in the Chrome devtools.

With caching disabled:

The final size of the bundle is 206 KB and the load time is 118 ms.

With caching enabled:

Not bad at all.

Development -> deployment workflow

The workflow from development to deployment is:

  • Write code and debug locally
  • Push the code to the remote Git repository
  • Deploy: fab deploy

Appendix: using npm scripts

npm scripts have become quite popular lately; many people use them instead of Grunt or Gulp for build automation.

We put the deployment commands into the scripts field of package.json and invoke the different scripts via npm run <script-name>, which is cleaner:

{
    "name": "your-project-name",
    "version": "1.0.0",
    "description": "",
    "main": "index.js",
    "scripts": {
        "start": "node server.js",
        "build": "webpack --progress --config webpack.config.prod.js && fis3 release -d ./output",
        "upload": "qrsync qrsync.conf.json",
        "deploy": "fab deploy"
    },
    ...
}

Then fabfile.py can be rewritten as:

# coding: utf-8
from fabric.api import run, env, cd

def deploy():
    env.host_string = "user@ip"
    with cd('/path/to/your/project'):
        run('git pull')
        run('npm install')
        run('npm run build')
        run('npm run upload')

The deployment command becomes npm run deploy. Much nicer.



Using geo-based data with SequelizeJS utilizing PostgreSQL and MS SQL Server in Node.js


I'm currently building an Angular 2 sample application which will use location-based data. The app uses the browser's navigator.geolocation feature to obtain the current position and send it to a server, which returns a list of chat messages within a given radius around the sent coordinate. If you are a German student, you may know this concept from the app Jodel. For sample purposes only, the backend of the app can use either PostgreSQL or Microsoft SQL Server (MSSQL), abstracted with the amazing SequelizeJS library. The app and the backend will later be open-sourced, so you can take a look at it yourself.

I’m pretty sure all the information in this blog post can be found elsewhere (and even in more detail). But it took me quite an amount of time to get it up and running. So I want to give you a condensed overview about it.

The intention of this blog post is to show the creation of a simple backend with the two different database engines. The code shown in this post is also hosted at Github. There is no talk about the Angular 2 frontend in this article, though.

Preparation

While MS SQL Server has a built-in Geographic Information System (GIS), PostgreSQL does not. Fortunately, PostgreSQL has an extension called PostGIS to support spatial- and geo-based data. Since I’m using a Mac for development, installing PostGIS is very easy if you use Postgres.app. It has integrated PostGIS support. If you don’t use the app, you need to refer to the PostGIS documentation for proper installation. After installing PostGIS you need to enable the extension for the database where you want to use it by executing CREATE EXTENSION postgis; against the database. That’s all you need to do.

Schema design

Both PostgreSQL and MSSQL support two different data types for spatial and geo-based data: geometry and geography. Geometry data is calculated on a planar plane. Geography data, however, is calculated on a sphere, which is defined by a Spatial Reference System Identifier (SRID, more on that below). Take a look at the following two images to see the difference.

Planar Coordinate System

Spherical Coordinate System

As you can see, within the planar coordinate system a line would be drawn straight from New York to Berlin, resulting in less accurate calculation results. As we all know, the earth is not flat, so the spherical coordinate system takes that into account and calculates distances on a sphere, leading to more accurate results. Hopefully you don't use a planar system to calculate the fuel for your airplane. ;-) In terms of pure performance, geometry-based data will be faster, since the calculations are simpler.

Some paragraphs above I mentioned a mandatory SRID when doing calculations on a spherical coordinate system. It is used to uniquely identify projected, unprojected or local spatial coordinate system definitions. In simpler words, it identifies how your coordinates are mapped onto a sphere, where they are valid (e.g. the whole world, or just a specific country) and which units calculations produce (kilometers, miles, …). For example, EPSG:4326/WGS84 is used for the worldwide GPS satellite navigation system, while EPSG:4258/ETRS89 can be used for calculations in Europe. It is also possible to convert data from one SRID into another.

Before you start doing your schema or table design, you should consider whether you want to use geometry or geography. As a very simple rule of thumb: If you don’t need to calculate distances across the globe or you have data which represents the earth, just go with geometry. Otherwise take geography into account.

SequelizeJS and GIS

GIS support in SequelizeJS has, on the one hand, existed since 2014-ish. On the other hand, unfortunately, it is only implemented for PostgreSQL with PostGIS. There is an ongoing discussion about implementing broader GIS support. Another drawback is that only geometry is currently supported; if you need geography, SequelizeJS cannot help you today, since it is not implemented as a data type at all. Nevertheless, for my little sample it is completely fine to use geometry data, even for location-based search, since the radius will be small enough to get good results. And we can actually use SequelizeJS for both PostgreSQL and MSSQL! The next paragraphs explain what you need to do to achieve this.

Prepare SequelizeJS

For the sample backend I'm using Node.js v5.4.0. First of all, we need to install the necessary dependencies; a simple npm i sequelize pg tedious is all we need. sequelize installs SequelizeJS, pg is the database driver for PostgreSQL and tedious the one for MSSQL.

Side note: There are official MSSQL drivers from Microsoft (here and here), but they are currently for Windows only.

Create the database connector class

Let’s start by creating a very simple and minimalistic class Database in ECMAScript 2015, which connects to the database and creates a model:

'use strict';

const Sequelize = require('sequelize');

function Database() {
    let sequelize;
    let dialect;
    let models = {};

    this.models = models;

    this.getDialect = function () {
        return dialect;
    };

    this.initialize = function (useMSSQL) {
        sequelize = useMSSQL ? connectToMSSQL() : connectToPostgreSQL();

        dialect = sequelize.connectionManager.dialectName;

        initializeModels();

        return syncDatabase();
    };

    function connectToMSSQL() {
        return new Sequelize('SampleDatabase', 'SampleUser', 'SamplePassword', {
            host: '10.211.55.3',
            dialect: 'mssql',
            dialectOptions: {
                instanceName: 'SQLEXPRESS2014'
            }
        });
    }

    function connectToPostgreSQL() {
        return new Sequelize('SampleDatabase', 'SampleUser', 'SamplePassword', {
            host: 'localhost',
            dialect: 'postgres'
        });
    }

    function initializeModels() {
        const SampleModel = sequelize.define('SampleModel', {
            id: {
                autoIncrement: true,
                type: Sequelize.INTEGER,
                primaryKey: true
            },
            point: {
                type: Sequelize.GEOMETRY('POINT'),
                allowNull: false
            }
        });

        models[SampleModel.name] = SampleModel;
    }

    function syncDatabase() {
        return sequelize.sync();
    }
}

module.exports = new Database();

Let's dissect this code. First things first: import Sequelize so we can use it. Then we define the Database class with a public field called models and two public functions called getDialect and initialize. The public field will hold our sample model so we can use it later. The getDialect function returns the dialect in use, either postgres or mssql. The initialize function initializes and connects to the database; within it we check whether we want to connect to PostgreSQL or MSSQL. After connecting, we create a SampleModel with an auto-incrementing primary key id and a point of type GEOMETRY('POINT'). SequelizeJS supports different kinds of geometries, depending on the underlying database engine. With GEOMETRY('POINT') we tell the database engine that we only want to store geometries of type point; other valid kinds would be LINESTRING or POLYGON, or you can omit the type completely to mix different kinds within the same column. We then store our model in the public field, so it is accessible later via this.models.SampleModel. Last but not least, we call syncDatabase(), which calls sequelize.sync() and returns a Promise; sequelize.sync() creates the necessary tables for the defined models.

Side note: all SequelizeJS methods that communicate with the database return a Promise.

The module gets exported as an instance/singleton.

Create the SampleService adapter

Next is a service class which uses our database and model to create entities and read data. The service is a wrapper around the actual implementations for the different database engines and provides access methods that could be used by a user interface or Web API to access the data.

'use strict';

const SampleServiceMSSQL = require('./sampleService.mssql'),
        SampleServicePostgreSQL = require('./sampleService.postgres');

function SampleService(database) {
    const adapter = database.getDialect() === 'mssql'
            ? new SampleServiceMSSQL(database.models.SampleModel)
            : new SampleServicePostgreSQL(database.models.SampleModel);

    this.create = function (latitude, longitude) {
        // Do some input parameter validation

        const point = {
            type: 'Point',
            coordinates: [latitude, longitude]
        };

        return adapter.create(point);
    };

    this.getAround = function (latitude, longitude) {
        // Do some input parameter validation
        return adapter.getAround(latitude, longitude);
    };
}

module.exports = SampleService;

At first we import two classes, SampleServiceMSSQL and SampleServicePostgreSQL, since we need different approaches for handling our geometry data. Then we define a SampleService which has a dependency on the database. Notice at the bottom that we export the class and not an instance: remember that database.initialize() returns a Promise when everything is set up, so we construct the service later, once that Promise has resolved.

Within the class we check which underlying database engine we have. In the case of MSSQL we construct a SampleServiceMSSQL, otherwise a SampleServicePostgreSQL. Both of them receive the model as their first argument, which again is only safe once database.initialize() has resolved.

The class itself defines two methods. The first, create(), creates a new entry in the database from the provided latitude and longitude. To do so, a point object is created with a property type of value 'Point' and a property coordinates containing an array with latitude and longitude. This format is called GeoJSON and can be used throughout SequelizeJS. Then we call the adapter's create method.

Exactly the same is done with the second method, getAround(). Its purpose is to get all points within a radius around the given latitude and longitude.

Please note that this sample intentionally lacks input validation, to keep it within the scope of this blog post.

Now we have a database and service class which functions as an adapter to the concrete implementations. Let’s build the implementations for PostgreSQL and MSSQL!

Implement the SampleServicePostgreSQL adapter class

We start by building the SampleServicePostgreSQL class:

'use strict';

function SampleServicePostgreSQL(model) {
    this.create = function (point) {
        return model.create({
            point: point
        });
    };

    this.getAround = function (latitude, longitude) {
        const query = `
SELECT
    "id", "createdAt", ST_Distance_Sphere(ST_MakePoint(:latitude, :longitude), "point") AS distance
FROM
    "SampleModels"
WHERE
    ST_Distance_Sphere(ST_MakePoint(:latitude, :longitude), "point") < :maxDistance
`;

        return model.sequelize.query(query, {
            replacements: {
                latitude: parseFloat(latitude),
                longitude: parseFloat(longitude),
                maxDistance: 10 * 1000
            },
            type: model.sequelize.QueryTypes.SELECT
        });
    };
}

module.exports = SampleServicePostgreSQL;

This is our adapter for PostgreSQL. The implementation of the create method is really straightforward. Every SequelizeJS model has a create method which inserts the model data into the underlying database. Thanks to the PostGIS support we can simply call model.create(point) and let SequelizeJS take care of inserting our data correctly.

Let's take a look at the getAround method. As mentioned above, SequelizeJS has support for PostGIS, but unfortunately it is very basic: it supports inserting, updating and reading, but offers no well-defined API abstraction for methods like ST_Distance_Sphere or ST_MakePoint. According to this Github issue, though, that is currently being discussed. By the way, the mentioned methods are open standards from the Open Geospatial Consortium (OGC). We will see them again later when implementing the MS SQL Server adapter.

Back to the getAround method. First we declare our parameterized query. We select the id, the createdAt and calculate a distance. OK, wait. What's happening here? We don't have a createdAt property in our model, do we? Well, we have, but not an explicit one. By default, SequelizeJS automatically creates additional createdAt and updatedAt properties for us and keeps track of them. SequelizeJS wouldn't be SequelizeJS if you couldn't change this behavior.

What about the ST_Distance_Sphere(ST_MakePoint(:latitude, :longitude), "point") AS distance? We use ST_MakePoint to create a point from our latitude and longitude parameters. Then we use the result as the first parameter for ST_Distance_Sphere. The second parameter, "point", references our table column. So for every row in our table SampleModels (SequelizeJS automatically pluralizes table names by default) we calculate the spherical distance (although it is a planar geometry object) between the given point and the one in our column. Be careful here and don't get confused! ST_Distance_Sphere calculates the distance using a fixed earth mean radius of 6370986 meters. If you want to use a real spheroid according to the SRID mentioned above, you need to use ST_Distance_Spheroid.

The WHERE part of the query is used to select only data within the provided radius, represented by the named parameter maxDistance. Last but not least, we run this query against our PostgreSQL database by calling model.sequelize.query. The first parameter is our query, the second is an options object. As you may have noticed, we used named placeholders in our query; the replacements object tells SequelizeJS the values for those placeholders. latitude and longitude are self-explanatory; maxDistance is set to 10 kilometers, so we only get points within that radius. With the type property we set the type of the query to a SELECT statement.

So far, so good, our PostgreSQL adapter is done. Let’s move on to the MSSQL adapter!

Implement the SampleServiceMSSQL adapter class

The code for the SampleServiceMSSQL  class is the following:

'use strict';

function SampleServiceMSSQL(model) {
    this.create = function (point) {
        const query = `
INSERT INTO [SampleModels]
    (
        [point],
        [createdAt],
        [updatedAt]
    )
VALUES
    (
        geometry::Point(${point.coordinates[0]}, ${point.coordinates[1]}, 0),
        ?,
        ?
    )`;

        return model.sequelize.query(query, {
            replacements: [
                new Date().toISOString(),
                new Date().toISOString()
            ],
            model: model,
            type: model.sequelize.QueryTypes.INSERT
        });
    };

    this.getAround = function (latitude, longitude) {
        const maxDistance = 10 * 1000;
        const earthMeanRadius = 6370986 * Math.PI / 180;

        const query = `
SELECT
    [id], [createdAt], [point].STDistance(geometry::Point(?, ?, 0)) * ? AS distance
FROM
    [SampleModels]
WHERE
    [point].STDistance(geometry::Point(?, ?, 0)) * ? < ?
        `;

        return model.sequelize.query(query, {
            replacements: [
                latitude,
                longitude,
                earthMeanRadius,
                latitude,
                longitude,
                earthMeanRadius,
                maxDistance
            ],
            type: model.sequelize.QueryTypes.SELECT
        });
    };
}

module.exports = SampleServiceMSSQL;

Let's go through this step by step. Since SequelizeJS offers no geometry support for MSSQL, we need to do everything manually now. Take a look at the create method. We start by defining our INSERT query, inserting the values point, createdAt and updatedAt. When executing a raw query we have to take care of setting the createdAt and updatedAt values ourselves. For the value of point we use geometry::Point(${point.coordinates[0]}, ${point.coordinates[1]}, 0). If you are not familiar with JavaScript's template literals this may hurt your eyes a bit: the syntax ${expression} simply inserts the value into the string. geometry::Point() is MSSQL's equivalent of the ST_MakePoint mentioned above, with one difference: it wants a third parameter, the SRID. Since we don't use one here, we simply pass 0.

You may have noticed that we don't use named parameters here. SequelizeJS automatically recognizes everything prefixed with a colon, so it would try to replace :Point with a named parameter. Fortunately, the replacements option can be an array as well, replacing the question marks with the given values in order of appearance. Additionally we supply a property model with the value of our model; this tells SequelizeJS to automatically map the result of the INSERT statement to our model. Finally, we set the kind of the query to INSERT.

Now to our last method, getAround. It is basically the same as the one from the PostgreSQL adapter, but since we don't use an SRID for the calculation, MS SQL Server calculates on a plane. STDistance therefore returns the distance in coordinate degrees, which is why we multiply the result by the earth mean radius expressed in meters per degree (6370986 × π / 180, roughly 111,195 m per degree) to get the distance in meters. Note: this is slightly less accurate than the PostgreSQL calculation with ST_Distance_Sphere.

Wow. Take a deep breath, we have finished the database and service classes. The last thing to do is a bit of orchestration to try everything out!

Orchestration

Create a new index.js  file with the following content:

'use strict';

const database = require('./database'),
    Service = require('./sampleService');

let service;

database.initialize(false)
    .then(() => {
        service = new Service(database);

        return service.create(49.019994, 8.413086);
    })
    .then(() => {
        return service.getAround(49.013626, 8.404480);
    })
    .then(result => {
        console.log(result);
    });

Absolutely straightforward. Import the database and the SampleService class. Then initialize the database with the PostgreSQL connection. After initialization, create a new Service with the database and insert a coordinate. Then call service.getAround() with another coordinate and print the result to the console. To run the sample app, open a terminal where your index.js is located and execute node index.js. You should now see the distance between the Schloss Karlsruhe and the Wildparkstadion, which looks like this:

Sample Output

SequelizeJS logs the executed query by default (with the replaced values, which means you can easily run the statement manually and look at its execution plan for optimization. How awesome!). If you don't like that, change it. ;-)
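As a side note, if you prefer to silence that logging, Sequelize accepts a logging option in its constructor; a minimal sketch (same connection settings as above, otherwise unchanged):

function connectToPostgreSQL() {
    return new Sequelize('SampleDatabase', 'SampleUser', 'SamplePassword', {
        host: 'localhost',
        dialect: 'postgres',
        // pass false to disable query logging, or pass a custom function to redirect it
        logging: false
    });
}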

At the bottom of the output, right after the SQL statement, is our result (PostgreSQL):

[
    {
        "id": 3,
        "createdAt": "Fri Jan 08 2016 09:03:07 GMT+0100 (CET)",
        "distance": 1185.92294455
    }
]

The same sample executed with MS SQL Server results in:

[
    {
        "id": 4,
        "createdAt": "Fri Jan 08 2016 09:11:44 GMT+0100 (CET)",
        "distance": 1190.4306593755073
    }
]

As you can see, there is a slight difference in distance (approx. 5 meters), which could grow as the distances get larger. Since the sample app will only use data within a 10 km radius, this is completely OK.

If you want to download this sample, head over to Github.


A Chat About Front-end Test Automation


Preface

Why test?

I used to dislike writing tests, mainly because writing and maintaining test cases felt like a huge waste of time. Only after spending some time writing basic components and basic tooling did I discover how many benefits automated testing brings. The most important one is, naturally, better code quality. Code with test cases is not guaranteed to be 100% bug-free, but at least the scenarios the test cases cover are known to work. With test cases in place, running them before every release eliminates all kinds of functional bugs caused by carelessness.

Another important trait of automated testing is fast feedback, and faster feedback means higher development efficiency. Take UI components as an example: during development you keep opening the browser, refreshing the page and clicking around to check whether the component behaves as expected. With automated tests, scripts replace the manual clicking; add a file watcher and every save immediately tells you whether your change broke anything. That saves a lot of time, since machines are always much faster than humans at this kind of work.

With automated tests, developers also trust their own code more. They no longer fear handing the code over to someone else to maintain, or worry about other developers "wrecking" it. Whoever inherits a piece of code that has test cases can modify it with much more confidence. The test cases state very clearly what the developers and users expect from the code, which also helps the code get passed on.

Test with return on investment in mind

Listing all these benefits does not mean you should start by writing test cases that cover 100% of the scenarios. I have always held one view: test based on return on investment. Maintaining test cases is itself a significant cost (after all, hardly any QA engineers write business test cases specifically for the front end, let alone for the workflow automation tools the front end uses). For parts that rarely change and are widely reused, such as base components and base models, it is worth writing test cases to guarantee quality. I personally prefer to first write a small number of test cases that cover 80%+ of the scenarios, making sure the main usage flows are covered. Bugs that show up in edge cases can be turned into test cases during later iterations, and coverage will gradually approach 100%. But for fast-changing business logic and short-lived campaign pages, don't spend time writing test cases; maintaining them would cost far too much.

Testing Node.js modules

Testing Node.js modules is relatively convenient: the source code and its dependencies are all local, visible and tangible.

Testing tools

Testing mainly involves a test framework, an assertion library and a code coverage tool:

  1. Test framework: Mocha, Jasmine and the like. A test framework provides clear, concise syntax for describing test cases and for grouping them; it catches the AssertionErrors thrown by the code and attaches plenty of extra information, such as which case failed and why. Test frameworks usually offer TDD (test-driven development) or BDD (behavior-driven development) syntax for writing test cases; for a comparison see the well-known article The Difference Between TDD and BDD. Different frameworks support different syntaxes: Mocha supports both TDD and BDD, while Jasmine only supports BDD. The rest of this article uses Mocha's BDD syntax in its examples
  2. Assertion library: Should.js, chai, expect.js and the like. An assertion library provides many semantic methods for making all kinds of assertions about values. You can also skip the assertion library and use Node.js's built-in assert module directly. The rest of this article uses Should.js in its examples
  3. Code coverage: istanbul and similar tools instrument the code at the statement/branch level, run the instrumented code, and combine the information collected at runtime with the instrumentation information to report how much of the source the current test cases cover

A simple example

Take the following Node.js project structure as an example:

.
├── LICENSE
├── README.md
├── index.js
├── node_modules
├── package.json
└── test
    └── test.js

The first step is of course installing the tools, starting with the test framework and assertion library: npm install --save-dev mocha should. Once they are installed, the testing journey can begin.

Say we have a piece of JS code in index.js:

'use strict';
module.exports = () => 'Hello Tmall';

For such a function, we first need to define a test case. Here it is obvious: run the function, and getting the string Hello Tmall counts as a pass. We can write that test case in Mocha style, so create the test code at test/test.js:

'use strict';
require('should');
const mylib = require('../index');

describe('My First Test', () => {
  it('should get "Hello Tmall"', () => {
    mylib().should.be.eql('Hello Tmall');
  });
});

The test case is written; now how do we find out the result?

Since Mocha is already installed, it can be found under node_modules. Mocha ships a command-line tool, _mocha, located at ./node_modules/.bin/_mocha; running it executes the tests:

Hello Tmall

Now we can see the test result. We can also deliberately make the test fail by changing test.js to:

'use strict';
require('should');
const mylib = require('../index');

describe('My First Test', () => {
  it('should get "Hello Taobao"', () => {
    mylib().should.be.eql('Hello Taobao');
  });
});

and we get this:

Taobao is different from Tmall

Mocha actually supports many parameters for flexible control. For example, with ./node_modules/.bin/_mocha --require should, Mocha loads Should.js itself at startup, so test/test.js no longer needs the manual require('should');. See the official Mocha documentation for more options.

So what do the different parts of this test code mean?

First it requires the assertion library Should.js, then it requires our own code. The it() function defines a test case, and the API provided by Should.js lets us describe the expectation in a very readable way. Then what is describe for?

describe groups test cases. To cover as many situations as possible there are usually a lot of test cases, and grouping makes them easier to manage (as a side note, describe can be nested: an outer group can contain sub-groups). Groups also have a very important feature: each group can have its own setup (before, beforeEach) and teardown (after, afterEach) hooks.

If we change the index.js source to:

'use strict';
module.exports = bu => `Hello ${bu}`;

To test different BUs, the test cases change accordingly:

'use strict';
require('should');
const mylib = require('../index');
let bu = 'none';

describe('My First Test', () => {
  describe('Welcome to Tmall', () => {
    before(() => bu = 'Tmall');
    after(() => bu = 'none');
    it('should get "Hello Tmall"', () => {
      mylib(bu).should.be.eql('Hello Tmall');
    });
  });
  describe('Welcome to Taobao', () => {
    before(() => bu = 'Taobao');
    after(() => bu = 'none');
    it('should get "Hello Taobao"', () => {
      mylib(bu).should.be.eql('Hello Taobao');
    });
  });
});

Run ./node_modules/.bin/_mocha again and you will see:

all bu welcomes you

Here before runs before all test cases of its group, and correspondingly after runs after all of them. To work at the granularity of a single test case, use beforeEach and afterEach, which run before and after each test case in the group respectively. Since a lot of code needs a simulated environment, you can do that preparation in before or beforeEach and clean it up in after or afterEach.

Testing asynchronous code

Callbacks

The code so far is clearly synchronous, but in many cases our code runs asynchronously; how do we test asynchronous code?

Say index.js turns into a piece of asynchronous code:

'use strict';
module.exports = (bu, callback) => process.nextTick(() => callback(`Hello ${bu}`));

Since the source is now asynchronous, the test case has to be adapted:

'use strict';
require('should');
const mylib = require('../index');

describe('My First Test', () => {
  it('Welcome to Tmall', done => {
    mylib('Tmall', rst => {
      rst.should.be.eql('Hello Tmall');
      done();
    });
  });
});

The function passed as the second argument of it now takes a done parameter. When this parameter is present, the test case is treated as asynchronous, and it only finishes when done() is called. What if done() is never called? Mocha triggers its timeout mechanism: after a certain time (2s by default, configurable with the --timeout parameter) it aborts the test and marks it as failed.

The before, beforeEach, after and afterEach hooks support async in exactly the same way as it: add done as the first parameter of the function you pass in, and call it when the work is finished.
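For example, an asynchronous before hook could prepare the fixture like this (a minimal sketch, reusing the requires from the test file above; setTimeout just stands in for some real asynchronous setup):

'use strict';
require('should');
const mylib = require('../index');
let bu = 'none';

describe('My First Test', () => {
  // Mocha waits until done() is called before running the test cases in this group
  before(done => {
    setTimeout(() => {
      bu = 'Tmall';
      done();
    }, 100);
  });
  it('Welcome to Tmall', done => {
    mylib(bu, rst => {
      rst.should.be.eql('Hello Tmall');
      done();
    });
  });
});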

Promise

Writing plain callbacks feels rather low-tech and easily produces callback pyramids; we can use Promises for asynchronous control instead. So how do we test asynchronous code driven by Promises?

First, tweak the source so it returns a Promise:

'use strict';
module.exports = bu => new Promise(resolve => resolve(`Hello ${bu}`));

Of course, co fans can simply wrap it with co:

'use strict';
const co = require('co');
module.exports = co.wrap(function* (bu) {
  return `Hello ${bu}`;
});

The test case changes accordingly:

'use strict';
require('should');
const mylib = require('../index');

describe('My First Test', () => {
  it('Welcome to Tmall', () => {
    return mylib('Tmall').should.be.fulfilledWith('Hello Tmall');
  });
});

Since version 8.x.x, Should.js ships with Promise support; you can use fulfilled(), rejected(), fulfilledWith(), rejectedWith() and a whole family of related APIs to test Promise objects.

Note: when testing a Promise object with should, you must return the assertion. Return it. Really, return it. Otherwise the assertion has no effect.
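For example, a rejection can be asserted in the same way (a minimal sketch; failing is a hypothetical function that always rejects, used purely for illustration):

'use strict';
require('should');
// hypothetical function that always rejects, for illustration only
const failing = () => Promise.reject(new Error('boom'));

describe('Rejection', () => {
  it('should reject', () => {
    // the assertion must be returned, otherwise Mocha will not wait for it
    return failing().should.be.rejectedWith(Error);
  });
});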

Running the test suite asynchronously

Sometimes it is not a single test case that is asynchronous; the whole test run needs to start asynchronously. For example, one way to test a Gulp plugin is to run the Gulp task first and, once it finishes, check whether the generated files match expectations. So how do we start the whole test run asynchronously?

Mocha supports delayed startup: just add the --delay flag to the Mocha command and it starts in asynchronous mode. In this mode we have to tell Mocha when to start running the test cases by calling run(). Change the test/test.js from before to this:

'use strict';
require('should');
const mylib = require('../index');

setTimeout(() => {
  describe('My First Test', () => {
    it('Welcome to Tmall', () => {
      return mylib('Tmall').should.be.fulfilledWith('Hello Tmall');
    });
  });
  run();
}, 1000);

Running ./node_modules/.bin/_mocha directly leads to the following tragedy:

no cases

Now try adding --delay:

oh my green

The familiar green is back!

Code coverage

With unit tests under control, it's time to try code coverage. First install the coverage tool istanbul: npm install --save-dev istanbul. istanbul also ships a command-line tool, found at ./node_modules/.bin/istanbul. Running coverage for Node.js code is simple: just start Mocha through istanbul. For the test above, run ./node_modules/.bin/istanbul cover ./node_modules/.bin/_mocha -- --delay and you will see:

my first coverage

That is the coverage result. Because the code in index.js is trivial, it goes straight to 100%, so let's modify the source and add an if:

'use strict';
module.exports = bu => new Promise(resolve => {
  if (bu === 'Tmall') return resolve(`Welcome to Tmall`);
  resolve(`Hello ${bu}`);
});

The test case changes along with it:

'use strict';
require('should');
const mylib = require('../index');

setTimeout(() => {
  describe('My First Test', () => {
    it('Welcome to Tmall', () => {
      return mylib('Tmall').should.be.fulfilledWith('Welcome to Tmall');
    });
  });
  run();
}, 1000);

With the new setup, run ./node_modules/.bin/istanbul cover ./node_modules/.bin/_mocha -- --delay again, and we get:

coverage again

When running Mocha through istanbul, istanbul's own arguments go before the --, and the arguments to be passed on to Mocha go after it.

As expected, coverage is no longer 100%. Now, how do I see which code was executed and which wasn't?

After the run, a coverage folder appears in the project; that is where the coverage results live. Its structure looks roughly like this:

.
├── coverage.json
├── lcov-report
│   ├── base.css
│   ├── index.html
│   ├── prettify.css
│   ├── prettify.js
│   ├── sort-arrow-sprite.png
│   ├── sorter.js
│   └── test
│       ├── index.html
│       └── index.js.html
└── lcov.info
  • coverage.json and lcov.info: JSON descriptions of the test results; tools can read these files to generate a visual coverage report, and they will come up again later when we hook up continuous integration.
  • lcov-report: the coverage report pages generated from the two files above; open them for a very intuitive view of the coverage.

Run open coverage/lcov-report/index.html to see the file list; click a file to drill into its details. The coverage of index.js looks like this:

coverage report

There are four metrics here that quantify the code coverage:

  • Statements: how many executable statements were executed
  • Branches: how many branches were executed; an if, for example, produces two branches, and we only ran one of them
  • Functions: how many functions were called
  • Lines: how many lines were executed

In the code view below, code that was never executed is highlighted in red. Red lines are fertile ground for bugs, and we want to eliminate as much red as possible. So let's add another test case:

'use strict';
require('should');
const mylib = require('../index');

setTimeout(() => {
  describe('My First Test', () => {
    it('Welcome to Tmall', () => {
      return mylib('Tmall').should.be.fulfilledWith('Welcome to Tmall');
    });
    it('Hello Taobao', () => {
      return mylib('Taobao').should.be.fulfilledWith('Hello Taobao');
    });
  });
  run();
}, 1000);

Run ./node_modules/.bin/istanbul cover ./node_modules/.bin/_mocha -- --delay once more and reopen the coverage page: the red is gone and coverage is back to 100%. Mission accomplished; time for a good night's sleep.

Integrating with package.json

That completes a simple Node.js test setup. All of these test tasks can be collected in the scripts field of package.json, for example:

{
  "scripts": {
    "test": "NODE_ENV=test ./node_modules/.bin/_mocha --require should",
    "cov": "NODE_ENV=test ./node_modules/.bin/istanbul cover ./node_modules/.bin/_mocha -- --delay"
  },
}

Now npm run test runs the unit tests and npm run cov runs the coverage tests. Quick and convenient.

Testing multiple files separately

Our projects usually contain many files, and the recommended approach is to test each file separately. For example, if the code lives under ./lib/, every file under ./lib/ should have a corresponding test file named <filename>_spec.js under ./test/.

Why do it this way? Can't we just run the tests against the index.js entry point?

Testing only through the entry point is black-box testing: we don't know what happens inside the code, only whether a particular input produces the expected output. That usually covers the main scenarios, but edge cases deep inside the code are hard to reach by feeding specific inputs at the entry. For example, suppose the code sends a request and the entry point only receives a URL. Whether the URL is correct is one thing, but the network and server conditions at that moment are unpredictable: with the same URL the request may fail because the server is down or the network flaked, and if that error is not handled it can easily turn into a production incident. So we need to open the black box and white-box test each small piece inside.

Of course, not every module is that easy to test. Two things front-end developers often build with Node.js are build plugins and automation tools, typically Gulp plugins and command-line tools. How do we test these two particular kinds of code?

Testing Gulp plugins

Gulp is currently the most widely used front-end build tool. Its concise API, streaming build model and in-memory performance have made it very popular. Although newcomers like webpack have appeared, Gulp, backed by its thriving ecosystem, remains the absolute mainstay of front-end builds. The Tmall front end currently uses Gulp as its build tool.

Using Gulp as the build tool inevitably means developing Gulp plugins to satisfy business-specific build requirements. A build essentially rewrites the source code, and a bug introduced during that rewrite can lead directly to a production incident. So Gulp plugins, especially those that modify source code, must be tested carefully to guarantee quality.

Another simple example

Say we have a trivial Gulp plugin whose job is to prepend the comment // 天猫前端招人,有意向的请发送简历至lingyucoder@gmail.com to every JS file. The plugin code looks roughly like this:

'use strict';

const _ = require('lodash');
const through = require('through2');
const PluginError = require('gulp-util').PluginError;
const DEFAULT_CONFIG = {};

module.exports = config => {
  config = _.defaults(config || {}, DEFAULT_CONFIG);
  return through.obj((file, encoding, callback) => {
    if (file.isStream()) return callback(new PluginError('gulp-welcome-to-tmall', `Stream is not supported`));
    file.contents = new Buffer(`// 天猫前端招人,有意向的请发送简历至lingyucoder@gmail.com\n${file.contents.toString()}`);
    callback(null, file);
  });
};

How do we test a piece of code like this?

One way is to fabricate a file and pass it in. Internally, Gulp uses vinyl-fs to read files from the operating system and wrap them into virtual file objects (vinyl objects), which are then handed to the Transform created by through2 to rewrite the contents in the stream, while the outer tasks are coordinated by orchestrator to guarantee execution order (if this is unfamiliar, see the translated article Gulp思维——Gulp高级技巧). A plugin does not need to care about Gulp's task management; it only needs to handle the vinyl object passed to it correctly. So it is enough to fabricate a virtual file object and feed it to our Gulp plugin.

First, design the test cases around two main scenarios:

  1. The virtual file object is in streaming mode: the plugin should throw an error
  2. The virtual file object is in buffer mode: the plugin should process the content correctly, prepending the // 天猫前端招人,有意向的请发送简历至lingyucoder@gmail.com header

For the first case we need to create a stream-mode vinyl object; for the second, a buffer-mode vinyl object.

Of course, we first need a source file to be processed; put it at test/src/testfile.js:

'use strict';
console.log('hello world');

The source file is trivial. The next task is to wrap it into a stream-mode vinyl object and a buffer-mode vinyl object respectively.

Building a buffer-mode virtual file object

A buffer-mode virtual file object can be built with vinyl-fs, which reads a file from the operating system and produces a vinyl object. Gulp uses it internally, and buffer mode is the default:

'use strict';
require('should');
const path = require('path');
const vfs = require('vinyl-fs');
const welcome = require('../index');

describe('welcome to Tmall', function() {
  it('should work when buffer', done => {
    vfs.src(path.join(__dirname, 'src', 'testfile.js'))
      .pipe(welcome())
      .on('data', function(vf) {
        vf.contents.toString().should.be.eql(`// 天猫前端招人,有意向的请发送简历至lingyucoder@gmail.com\n'use strict';\nconsole.log('hello world');\n`);
        done();
      });
  });
});

With buffer mode tested, the main functionality is covered. So how do we test streaming mode?

Building a stream-mode virtual file object

Option one is to use vinyl-fs just like above, simply adding the option buffer: false.

Change the code to this:

'use strict';
require('should');
const path = require('path');
const vfs = require('vinyl-fs');
const PluginError = require('gulp-util').PluginError;
const welcome = require('../index');

describe('welcome to Tmall', function() {
  it('should work when buffer', done => {
    // blabla
  });
  it('should throw PluginError when stream', done => {
    vfs.src(path.join(__dirname, 'src', 'testfile.js'), {
      buffer: false
    })
      .pipe(welcome())
      .on('error', e => {
        e.should.be.instanceOf(PluginError);
        done();
      });
  });
});

This way vinyl-fs reads the file straight from the file system and produces a stream-mode vinyl object.

What if the content does not come from the file system, but from an existing readable stream? How do we wrap that into a stream-mode vinyl object?

For that we can use vinyl-source-stream:

'use strict';
require('should');
const fs = require('fs');
const path = require('path');
const source = require('vinyl-source-stream');
const vfs = require('vinyl-fs');
const PluginError = require('gulp-util').PluginError;
const welcome = require('../index');

describe('welcome to Tmall', function() {
  it('should work when buffer', done => {
    // blabla
  });
  it('should throw PluginError when stream', done => {
    fs.createReadStream(path.join(__dirname, 'src', 'testfile.js'))
      .pipe(source())
      .pipe(welcome())
      .on('error', e => {
        e.should.be.instanceOf(PluginError);
        done();
      });
  });
});

Here we first create a readable stream with fs.createReadStream, then wrap it into a stream-mode vinyl object with vinyl-source-stream, and hand it to our plugin for processing.

When a Gulp plugin hits an error, throw a PluginError: that lets plugins like gulp-plumber manage the error and keep it from terminating the build process, which is very useful with gulp watch.
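A typical watch setup with gulp-plumber might look like this (a sketch only, assuming the plugin from this article and gulp-plumber installed as a devDependency):

'use strict';
const gulp = require('gulp');
const plumber = require('gulp-plumber');
const welcome = require('../index');

gulp.task('build', () => {
  return gulp.src('src/**/*.js')
    .pipe(plumber())      // catches the PluginError so a bad file does not kill the watch process
    .pipe(welcome())
    .pipe(gulp.dest('build'));
});

gulp.task('watch', ['build'], () => {
  gulp.watch('src/**/*.js', ['build']);
});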

Simulating a Gulp run

Our fabricated objects already get the functional tests passing, but that data is still hand-made; it is not how users actually run the plugin. Tests that run the way users actually use the code produce the most reliable and realistic results. So the question is: how do we simulate a real Gulp environment to test a Gulp plugin?

First, mimic our project structure:

test
├── build
│   └── testfile.js
├── gulpfile.js
└── src
    └── testfile.js

A minimal project layout: sources under src, tasks defined in the gulpfile, build output under build. Set this up under the test directory exactly the way we normally would, and write the gulpfile.js:

'use strict';
const gulp = require('gulp');
const welcome = require('../index');
const del = require('del');

gulp.task('clean', cb => del('build', cb));

gulp.task('default', ['clean'], () => {
  return gulp.src('src/**/*')
    .pipe(welcome())
    .pipe(gulp.dest('build'));
});

Next, simulate the Gulp run from the test code. There are two options:

  1. Use spawn or exec from the child_process module to run the gulp command in a child process, then check whether the build directory contains the expected result
  2. Obtain the Gulp instance from the gulpfile in the current process and run the Gulp tasks directly, then check whether the build directory contains the expected result

Testing in child processes has its pitfalls: istanbul cannot collect coverage across processes, so you would have to prefix the command run in the child process with istanbul, collect the coverage data manually, and merge the coverage results yourself when several child processes are involved. Quite a hassle.

How do we avoid child processes? We can use run-gulp-task. Internally it reads the gulpfile contents, appends module.exports = gulp; at the end, then requires the gulpfile to obtain the Gulp instance, and finally hands that instance to run-sequence, which runs the tasks through the undocumented API gulp.run.

We take the no-child-process route and run Gulp inside a before hook; the test code becomes:

'use strict';
require('should');
const path = require('path');
const run = require('run-gulp-task');
const CWD = process.cwd();
const fs = require('fs');

describe('welcome to Tmall', () => {
  before(done => {
    process.chdir(__dirname);
    run('default', path.join(__dirname, 'gulpfile.js'))
      .catch(e => e)
      .then(e => {
        process.chdir(CWD);
        done(e);
      });
  });
  it('should work', function() {
    fs.readFileSync(path.join(__dirname, 'build', 'testfile.js')).toString().should.be.eql(`// 天猫前端招人,有意向的请发送简历至lingyucoder@gmail.com\n'use strict';\nconsole.log('hello world');\n`);
  });
});

Since no child process is needed, coverage testing works just like for an ordinary Node.js module.

Testing command-line output

Yet another simple example

Front-end tooling is of course not limited to Gulp plugins; occasionally we also write small helper commands that run directly in the terminal and print their results there. For example, a simple command-line tool built with commander:

// in index.js
'use strict';
const program = require('commander');
const path = require('path');
const pkg = require(path.join(__dirname, 'package.json'));

program.version(pkg.version)
  .usage('[options] <file>')
  .option('-t, --test', 'Run test')
  .action((file, prog) => {
    if (prog.test) console.log('test');
  });

module.exports = program;

// in bin/cli
#!/usr/bin/env node
'use strict';
const program = require('../index.js');

program.parse(process.argv);

!program.args[0] && program.help();

// in package.json
{
  "bin": {
    "cli-test": "./bin/cli"
  }
}

Intercepting the output

To test a command-line tool we naturally have to simulate the user typing the command. Once again we avoid child processes and simply fabricate a process.argv to hand to program.parse. With the command fed in, the next problem appears: the output goes straight to console.log, so how do we intercept it?

We can use sinon to stub console.log, and sinon thoughtfully provides mocha-sinon for use with Mocha, so test.js looks roughly like this:

'use strict';
require('should');
require('mocha-sinon');
const program = require('../index');
const uncolor = require('uncolor');

describe('cli-test', () => {
  let rst;
  beforeEach(function() {
    this.sinon.stub(console, 'log', function() {
      rst = arguments[0];
    });
  });
  it('should print "test"', () => {
    program.parse([
      'node',
      './bin/cli',
      '-t',
      'file.js'
    ]);
    return uncolor(rst).trim().should.be.eql('test');
  });
});

PS: command-line output often uses libraries like colors to add color, so remember to strip the colors with uncolor when testing.

Summary

That's as far as I'll go on Node.js-related unit testing; there are plenty of other scenarios, such as testing servers, that I won't cover because I don't know them. The front end's main job is still building pages, though, so next let's talk about testing the components on a page.

Testing pages

Testing front-end code that runs in the browser is much more troublesome than testing Node.js modules. Node.js modules are pure JS, run locally on V8, and all the dependencies and tools they need can be installed quickly. Front-end code has to test not only JS but also CSS and more, and, worse, must be exercised in all kinds of browsers. Common approaches to testing front-end code include:

  1. Build a test page and manually open it in all kinds of browsers on virtual machines (for example, our in-house f2etest). The drawbacks: coverage testing is hard, continuous integration is hard, and a lot of manual work is involved
  2. Use PhantomJS to build a simulated browser environment and run the unit tests there. The advantages: coverage testing works and continuous integration is possible. The drawback: PhantomJS is, after all, Qt's WebKit rather than a real browser environment, and PhantomJS has its own assortment of compatibility pitfalls
  3. Use Karma to drive the browsers installed on the machine. The advantages: cross-browser testing and coverage testing. For continuous integration, note that only PhantomJS can run there, since the integration Linux environment has no real browsers. This is arguably the best front-end testing approach available today

This section uses gulp as the build tool; the React component section later shows testing with webpack as the build tool.

And another simple example

Front-end code is still JS, so Mocha + Should.js can still do the unit testing. Open Mocha and Should.js inside node_modules and you will find that these excellent open-source tools thoughtfully ship browser-ready builds: mocha/mocha.js and should/should.min.js. Just include them via script tags; Mocha additionally needs its stylesheet mocha/mocha.css.

First, a look at our front-end project structure:

.
├── gulpfile.js
├── package.json
├── src
│   └── index.js
└── test
    ├── test.html
    └── test.js

The source src/index.js simply defines a global function:

window.render = function() {
  var ctn = document.createElement('div');
  ctn.setAttribute('id', 'tmall');
  ctn.appendChild(document.createTextNode('天猫前端招人,有意向的请发送简历至lingyucoder@gmail.com'));
  document.body.appendChild(ctn);
}

The test page test/test.html looks roughly like this:

<!DOCTYPE html>
<html>

<head>
  <meta charset="utf-8">
  <link rel="stylesheet" href="../node_modules/mocha/mocha.css"/>
  <script src="../node_modules/mocha/mocha.js"></script>
  <script src="../node_modules/should/should.js"></script>
</head>

<body>
  <div id="mocha"></div>
  <script src="../src/index.js"></script>
  <script src="test.js"></script>
</body>

</html>

The head includes the test framework Mocha and the assertion library Should.js. The test results are rendered into the <div id="mocha"></div> container, and test/test.js contains our test code.

Testing on a page is not much different from testing in Node.js; you just need to specify which UI Mocha should use and call mocha.run() manually:

mocha.ui('bdd');
describe('Welcome to Tmall', function() {
  before(function() {
    window.render();
  });
  it('Hello', function() {
    document.getElementById('tmall').textContent.should.be.eql('天猫前端招人,有意向的请发送简历至lingyucoder@gmail.com');
  });
});
mocha.run();

Open test/test.html in a browser to see the result:

test page

Open this page in different browsers to run the tests in each of them. This approach reaches the largest number of browsers; just remember, before testing across machines, to upload the resources somewhere every test machine can access, such as a CDN.

With the test page in place, let's try hooking up PhantomJS.

Testing with PhantomJS

PhantomJS is a simulated browser: it executes JS and even has a WebKit rendering engine; there just isn't a browser window in which to render the result. It can be used for many things, such as taking screenshots of pages, crawling pages that render asynchronously, and, as shown next, testing pages.

We won't use PhantomJS directly here, but rather mocha-phantomjs. After installing it with npm install --save-dev mocha-phantomjs, run ./node_modules/.bin/mocha-phantomjs ./test/test.html to test the test/test.html page above:

PhantomJS test

With unit testing working, next comes code coverage.

Coverage instrumentation

Step one: rewrite our gulpfile.js:

'use strict';
const gulp = require('gulp');
const istanbul = require('gulp-istanbul');

gulp.task('test', function() {
  return gulp.src(['src/**/*.js'])
    .pipe(istanbul({
      coverageVariable: '__coverage__'
    }))
    .pipe(gulp.dest('build-test'));
});

This stores the coverage results in the __coverage__ variable and writes the instrumented code to the build-test directory. For the src/index.js above, running gulp test generates build-test/index.js, which looks roughly like this:

var __cov_WzFiasMcIh_mBvAjOuQiQg = (Function('return this'))();
if (!__cov_WzFiasMcIh_mBvAjOuQiQg.__coverage__) { __cov_WzFiasMcIh_mBvAjOuQiQg.__coverage__ = {}; }
__cov_WzFiasMcIh_mBvAjOuQiQg = __cov_WzFiasMcIh_mBvAjOuQiQg.__coverage__;
if (!(__cov_WzFiasMcIh_mBvAjOuQiQg['/Users/lingyu/gitlab/dev/mui/test-page/src/index.js'])) {
   __cov_WzFiasMcIh_mBvAjOuQiQg['/Users/lingyu/gitlab/dev/mui/test-page/src/index.js'] = {"path":"/Users/lingyu/gitlab/dev/mui/test-page/src/index.js","s":{"1":0,"2":0,"3":0,"4":0,"5":0},"b":{},"f":{"1":0},"fnMap":{"1":{"name":"(anonymous_1)","line":1,"loc":{"start":{"line":1,"column":16},"end":{"line":1,"column":27}}}},"statementMap":{"1":{"start":{"line":1,"column":0},"end":{"line":6,"column":1}},"2":{"start":{"line":2,"column":2},"end":{"line":2,"column":42}},"3":{"start":{"line":3,"column":2},"end":{"line":3,"column":34}},"4":{"start":{"line":4,"column":2},"end":{"line":4,"column":85}},"5":{"start":{"line":5,"column":2},"end":{"line":5,"column":33}}},"branchMap":{}};
}
__cov_WzFiasMcIh_mBvAjOuQiQg = __cov_WzFiasMcIh_mBvAjOuQiQg['/Users/lingyu/gitlab/dev/mui/test-page/src/index.js'];
__cov_WzFiasMcIh_mBvAjOuQiQg.s['1']++;window.render=function(){__cov_WzFiasMcIh_mBvAjOuQiQg.f['1']++;__cov_WzFiasMcIh_mBvAjOuQiQg.s['2']++;var ctn=document.createElement('div');__cov_WzFiasMcIh_mBvAjOuQiQg.s['3']++;ctn.setAttribute('id','tmall');__cov_WzFiasMcIh_mBvAjOuQiQg.s['4']++;ctn.appendChild(document.createTextNode('天猫前端招人\uFF0C有意向的请发送简历至lingyucoder@gmail.com'));__cov_WzFiasMcIh_mBvAjOuQiQg.s['5']++;document.body.appendChild(ctn);};

What on earth is this?! Never mind, all we need to do is run it. Change the script included in test/test.html from src/index.js to build-test/index.js, so the page runs the instrumented code.

Writing the hook

The runtime data ends up in the __coverage__ variable, but we still need a hook that grabs the contents of that variable after the unit tests finish. Put the hook code in test/hook.js with the following contents:

'use strict';

var fs = require('fs');

module.exports = {
  afterEnd: function(runner) {
    var coverage = runner.page.evaluate(function() {
      return window.__coverage__;
    });
    if (coverage) {
      console.log('Writing coverage to coverage/coverage.json');
      fs.write('coverage/coverage.json', JSON.stringify(coverage), 'w');
    } else {
      console.log('No coverage data generated');
    }
  }
};

That completes the preparation. Run ./node_modules/.bin/mocha-phantomjs ./test/test.html --hooks ./test/hook.js; you will see the result below, and the coverage data is written to coverage/coverage.json.

coverage hook

Generating the report pages

With the coverage results we can generate the coverage pages. First, a summary: run ./node_modules/.bin/istanbul report --root coverage text-summary and you will see:

coverage summary

Same recipe, same familiar flavor. Next run ./node_modules/.bin/istanbul report --root coverage lcov to generate the coverage pages, then open coverage/lcov-report/index.html and click through to src/index.js:

coverage page

Excellent! Now we can run coverage tests on front-end code as well.

Integrating Karma

Karma is a test integration framework that conveniently plugs in test frameworks, test environments, coverage tools and more. Karma already has a fairly complete plugin ecosystem. Here we'll try testing under PhantomJS, Chrome and Firefox, which requires installing a few dependencies with npm:

  1. karma: the framework itself
  2. karma-mocha: the Mocha test framework adapter
  3. karma-coverage: coverage testing
  4. karma-spec-reporter: test result output
  5. karma-phantomjs-launcher: the PhantomJS environment
  6. phantomjs-prebuilt: the latest PhantomJS build
  7. karma-chrome-launcher: the Chrome environment
  8. karma-firefox-launcher: the Firefox environment

Once these are installed, the Karma journey can begin. Still in the same project, remove what is no longer needed, keep only the source and test files, and add a karma.conf.js:

.
├── karma.conf.js
├── package.json
├── src
│   └── index.js
└── test
    └── test.js

karma.conf.js is Karma's configuration file; in this example it looks roughly like this:

'use strict';

module.exports = function(config) {
  config.set({
    frameworks: ['mocha'],
    files: [
      './node_modules/should/should.js',
      'src/**/*.js',
      'test/**/*.js'
    ],
    preprocessors: {
      'src/**/*.js': ['coverage']
    },
    plugins: ['karma-mocha', 'karma-phantomjs-launcher', 'karma-chrome-launcher', 'karma-firefox-launcher', 'karma-coverage', 'karma-spec-reporter'],
    browsers: ['PhantomJS', 'Firefox', 'Chrome'],
    reporters: ['spec', 'coverage'],
    coverageReporter: {
      dir: 'coverage',
      reporters: [{
        type: 'json',
        subdir: '.',
        file: 'coverage.json',
      }, {
        type: 'lcov',
        subdir: '.'
      }, {
        type: 'text-summary'
      }]
    }
  });
};

What do these settings mean? Let's go through them one by one:

  • frameworks: the test framework(s) to use, here once again our dear old Mocha
  • files: the resources the test page needs to load. The test directory above no longer contains a test.html; everything that should be loaded is listed here. A CDN URL also works, but local resources are preferable since the tests run faster and still work without a network. In this example the first entry loads the assertion library Should.js, the second all code under src, and the third the test code
  • preprocessors: the preprocessor configuration. Before a file listed in files is loaded, any matching preprocessor runs first and the processed result is loaded instead. In this example, everything under src gets coverage instrumentation (previously done with gulp-istanbul; now karma-coverage handles it conveniently, no hooks required). Webpack will also plug in here later when testing React components
  • plugins: the list of installed plugins
  • browsers: the browsers to test in, here PhantomJS, Firefox and Chrome
  • reporters: which reports to generate
  • coverageReporter: how to generate the coverage reports; we want the same output as before, including the coverage pages, lcov.info, coverage.json and the command-line summary

Configuration done, let's try it: run ./node_modules/karma/bin/karma start --single-run and you will see the following output:

run karma

Karma first starts a local server on port 9876, then launches PhantomJS, Firefox and Chrome to load the page it serves, collects the test results from each browser and reports them separately. Cross-browser testing solved. To add a browser, install the corresponding launcher plugin and list it in browsers; very flexible and convenient.

What if my Mac has no IE but I still want to test IE? Simply run ./node_modules/karma/bin/karma start to start the local server, then open the desired browser on another machine and point it at port 9876 of this machine (the port is configurable, of course). Mobile testing can use the same trick. This approach combines the advantages of the two previous schemes while fixing their shortcomings; it is the best front-end testing scheme I have seen so far.

Testing React components

Last year React swept the globe like a whirlwind, and Tmall has kept pace technically. Tmall's merchant-facing business has fully switched to React and built up a React component system; almost all new business is developed with React, and older business keeps migrating to it. With React this popular, let's look specifically at how to test a React + webpack setup.

Only React Web is discussed here, not React Native.

In fact Tmall does not currently bundle with webpack; instead, Gulp + Babel compiles the React CommonJS code into AMD modules, which offers more flexibility across new and old business, though some business is bundled with webpack and shipped that way.

One more simple example

Let's create a React component with roughly this directory structure (CSS is skipped here; once the setup works, integrating CSS via PostCSS or Less is no problem):

.
├── demo
├── karma.conf.js
├── package.json
├── src
│   └── index.jsx
├── test
│   └── index_spec.jsx
├── webpack.dev.js
└── webpack.pub.js

The component source src/index.jsx looks roughly like this:

import React from 'react';
class Welcome extends React.Component {
  constructor() {
    super();
  }
  render() {
    return <div>{this.props.content}</div>;
  }
}
Welcome.displayName = 'Welcome';
Welcome.propTypes = {
  /**
   * content of element
   */
  content: React.PropTypes.string
};
Welcome.defaultProps = {
  content: 'Hello Tmall'
};
module.exports = Welcome;

The corresponding test/index_spec.jsx looks roughly like this:

import 'should';
import Welcome from '../src/index.jsx';
import ReactDOM from 'react-dom';
import React from 'react';
import TestUtils from 'react-addons-test-utils';
describe('test', function() {
  const container = document.createElement('div');
  document.body.appendChild(container);
  afterEach(() => {
    ReactDOM.unmountComponentAtNode(container);
  });
  it('Hello Tmall', function() {
    let cp = ReactDOM.render(<Welcome/>, container);
    let welcome = TestUtils.findRenderedComponentWithType(cp, Welcome);
    ReactDOM.findDOMNode(welcome).textContent.should.be.eql('Hello Tmall');
  });
});

Since we are testing React, we naturally use React's TestUtils, a utility library with plenty of handy methods for finding nodes and components; most importantly, it provides APIs for simulating events, arguably the single most important capability for UI testing. For more on TestUtils see the React website; I won't go into it here…
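For example, simulating a click looks roughly like this (a sketch only: the Welcome component above has no click handler, so the final assertion here is purely illustrative):

it('should respond to click', function() {
  let cp = ReactDOM.render(<Welcome/>, container);
  let node = ReactDOM.findDOMNode(cp);
  // dispatch a synthetic click event on the rendered DOM node
  TestUtils.Simulate.click(node);
  // ...then assert on whatever the click handler is expected to change
});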

We have the code and the test case; all that's left is to run them. karma.conf.js certainly changes now. First, it needs an extra plugin, karma-webpack: our React component must be bundled by webpack, since unbundled code simply cannot run. Also note that the coverage setup changes. There is now an extra Babel compilation step, and compiling ES6/ES7 source into ES5 produces a lot of polyfill code; measuring coverage on the built output would include those polyfills, so the numbers would clearly be unreliable. This problem is solved by isparta-loader. The karma.conf.js for the React component looks roughly like this:

'use strict';
const path = require('path');

module.exports = function(config) {
  config.set({
    frameworks: ['mocha'],
    files: [
      './node_modules/phantomjs-polyfill/bind-polyfill.js',
      'test/**/*_spec.jsx'
    ],
    plugins: ['karma-webpack', 'karma-mocha', 'karma-chrome-launcher', 'karma-firefox-launcher', 'karma-phantomjs-launcher', 'karma-coverage', 'karma-spec-reporter'],
    browsers: ['PhantomJS', 'Firefox', 'Chrome'],
    preprocessors: {
      'test/**/*_spec.jsx': ['webpack']
    },
    reporters: ['spec', 'coverage'],
    coverageReporter: {
      dir: 'coverage',
      reporters: [{
        type: 'json',
        subdir: '.',
        file: 'coverage.json',
      }, {
        type: 'lcov',
        subdir: '.'
      }, {
        type: 'text-summary'
      }]
    },
    webpack: {
      module: {
        loaders: [{
          test: /\.jsx?/,
          loaders: ['babel']
        }],
        preLoaders: [{
          test: /\.jsx?$/,
          include: [path.resolve('src/')],
          loader: 'isparta'
        }]
      }
    },
    webpackMiddleware: {
      noInfo: true
    }
  });
};

Compared with the earlier karma.conf.js, the main differences are:

  1. Thanks to webpack bundling, the test code imports the component code directly, so the component code no longer needs to be listed manually in files
  2. The preprocessors run webpack bundling on every test file
  3. A webpack compile configuration is added; when compiling the source, preLoaders are defined and isparta-loader does the coverage instrumentation
  4. A webpackMiddleware configuration is added; noInfo means webpack should not print its long wall of build information

That basically completes the configuration. Run ./node_modules/karma/bin/karma start --single-run:

react karma

Great, the result matches expectations. Run open coverage/lcov-report/index.html to view the coverage page:

react coverage

Amazing!!! Coverage measured directly on the JSX code! With that, React component testing is largely done.

Summary

The main difficulty in testing front-end code is simulating all the different browser environments. Karma gives us a great approach: browsers available locally are launched and tested automatically, and browsers that aren't available locally can simply visit the served test page. With so many browsers, especially on mobile, perfection is impossible, but this way we can cover the mainstream browsers and make sure that most users are unaffected by each release.

Continuous integration

With test results in hand, the next step is feeding them into continuous integration. CI is an excellent practice for multi-person development: a code push triggers hooks that automatically run compilation, tests and so on. Once CI is in place, every push and every merge request produces its own test results, so other project members can see clearly whether the new code breaks existing functionality; with automatic alerting on top, errors are caught at commit time, which speeds up development iterations.

On each integration run, the CI service provides an almost blank virtual machine, copies the pushed code onto it, reads the CI configuration in the project, installs the environment and dependencies automatically, generates a report after compiling and testing, and releases the VM after a while.

Open-source continuous integration

The best-known open-source CI service is Travis, with code coverage handled by Coveralls. With a GitHub account you can easily hook up both: after ticking the projects you want integrated on their websites, every push triggers an automated test run. When the run finishes, both sites generate small badge images with the results.

build result

Travis reads the .travis.yml file in the project; a simple example:

language: node_js
node_js:
  - "stable"
  - "4.0.0"
  - "5.0.0"
script: "npm run test"
after_script: "npm install coveralls@2.10.0 && cat ./coverage/lcov.info | coveralls"

language defines the language of the runtime environment, and node_js lists the Node.js versions to test against; the definition here means the tests run on the latest stable release, 4.0.0 and 5.0.0 of Node.js.

script is the command used for testing. In general, every command a project needs during development should be written into the scripts field of package.json; for example, our test command ./node_modules/karma/bin/karma start --single-run belongs in scripts like this:

{
  "scripts": {
    "test": "./node_modules/karma/bin/karma start --single-run"
  }
}

after_script runs after the tests complete; here it uploads the coverage results to Coveralls, which only requires installing the coveralls package and piping lcov.info to it.

See the Travis website for more configuration options.

With this configured, after every push you can check the build results on Travis and the code coverage on Coveralls.

travis

coveralls

Summary

Hooking a project up to continuous integration is extremely useful when several people work on the same repository: every push automatically triggers the tests, and failures raise alerts. If requirements are managed with Issues plus Merge Requests, one issue and one branch per requirement, a merge request submitted when development finishes and the project owner responsible for merging, project quality is far better guaranteed.

Conclusion

This covers only a small slice of front-end testing; there is much more worth digging into, and testing itself is only one part of front-end workflow automation. With front-end technology advancing rapidly, front-end projects are no longer the slash-and-burn affairs they once were: more and more software engineering practice is being absorbed into them, and they are racing toward engineering rigor, standardized workflows and automation. Many more good ways to improve development efficiency and guarantee quality are waiting to be discovered.


Why You Can’t Trust GPS in China

by Geoff Manaugh February 26, 2016

One of the most interesting, if unanticipated, side effects of modern copyright law is the practice by which cartographic companies will introduce a fake street—a road, lane, or throughway that does not, in fact, exist on the ground—into their maps. If that street later shows up on a rival company’s products, then they have all the proof they need for a case of copyright infringement. Known as trap streets, these imaginary roads exist purely as figments of an overactive legal imagination.

Trap streets are also compelling evidence that maps don’t always equal the territory. What if not just one random building or street, however, but an entire map is deliberately wrong? This is the strange fate of digital mapping products in China: there, every street, building, and freeway is just slightly off its mark, skewed for reasons of national and economic security.

The result is an almost ghostly slippage between digital maps and the landscapes they document. Lines of traffic snake through the centers of buildings; monuments migrate into the midst of rivers; one’s own position standing in a park or shopping mall appears to be nearly half a kilometer away, as if there is more than one version of you on the loose. Stranger yet, your morning running route didn’t quite go where you thought it did.

It is, in fact, illegal for foreign individuals or organizations to make maps in China without official permission. As stated in the “Surveying and Mapping Law of the People’s Republic of China,” for example, mapping—even casually documenting “the shapes, sizes, space positions, attributes, etc. of man-made surface installations”—is considered a protected activity for reasons of national defense and “progress of the society.” Those who do receive permission must introduce a geographic offset into their products, a kind of preordained cartographic drift. An entire world of spatial glitches is thus deliberately introduced into the resulting map.

The central problem is that most digital maps today rely upon a set of coordinates known as the World Geodetic System 1984, or WGS-84; the U.S. National Geospatial-Intelligence Agency describes it as “the reference frame upon which all geospatial-intelligence is based.” However, as software engineer Dan Dascalescu writes in a Stack Exchange post, digital mapping products in China instead use something called “the GCJ-02 datum.” As he points out, an apparently random algorithmic offset “causes WGS-84 coordinates, such as those coming from a regular GPS chip, to be plotted incorrectly on GCJ-02 maps.” GCJ-02 data are also somewhat oddly known as “Mars Coordinates,” as if describing the geography of another planet. Translations back and forth between these coordinate systems—to bring China back to Earth, so to speak—are easy enough to find online, but they are also rather intimidating to non-specialists.
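For readers curious what those translations actually look like, below is a rough JavaScript sketch of the widely circulated WGS-84 to GCJ-02 approximation (the constants and polynomial terms are the commonly shared ones, not an official specification, so treat it as illustrative rather than authoritative):

// Rough sketch of the commonly published WGS-84 -> GCJ-02 ("Mars Coordinates") transform.
// The constants below are the widely shared approximation, not an official algorithm.
const a = 6378245.0;                 // semi-major axis used by the approximation
const ee = 0.00669342162296594323;   // eccentricity squared

function transformLat(x, y) {
  let ret = -100.0 + 2.0 * x + 3.0 * y + 0.2 * y * y + 0.1 * x * y + 0.2 * Math.sqrt(Math.abs(x));
  ret += (20.0 * Math.sin(6.0 * x * Math.PI) + 20.0 * Math.sin(2.0 * x * Math.PI)) * 2.0 / 3.0;
  ret += (20.0 * Math.sin(y * Math.PI) + 40.0 * Math.sin(y / 3.0 * Math.PI)) * 2.0 / 3.0;
  ret += (160.0 * Math.sin(y / 12.0 * Math.PI) + 320.0 * Math.sin(y * Math.PI / 30.0)) * 2.0 / 3.0;
  return ret;
}

function transformLon(x, y) {
  let ret = 300.0 + x + 2.0 * y + 0.1 * x * x + 0.1 * x * y + 0.1 * Math.sqrt(Math.abs(x));
  ret += (20.0 * Math.sin(6.0 * x * Math.PI) + 20.0 * Math.sin(2.0 * x * Math.PI)) * 2.0 / 3.0;
  ret += (20.0 * Math.sin(x * Math.PI) + 40.0 * Math.sin(x / 3.0 * Math.PI)) * 2.0 / 3.0;
  ret += (150.0 * Math.sin(x / 12.0 * Math.PI) + 300.0 * Math.sin(x / 30.0 * Math.PI)) * 2.0 / 3.0;
  return ret;
}

// Convert WGS-84 coordinates (what a regular GPS chip reports) to GCJ-02 for Chinese map tiles.
function wgs84ToGcj02(lat, lon) {
  const dLat0 = transformLat(lon - 105.0, lat - 35.0);
  const dLon0 = transformLon(lon - 105.0, lat - 35.0);
  const radLat = lat / 180.0 * Math.PI;
  let magic = Math.sin(radLat);
  magic = 1 - ee * magic * magic;
  const sqrtMagic = Math.sqrt(magic);
  const dLat = (dLat0 * 180.0) / ((a * (1 - ee)) / (magic * sqrtMagic) * Math.PI);
  const dLon = (dLon0 * 180.0) / (a / sqrtMagic * Math.cos(radLat) * Math.PI);
  return { lat: lat + dLat, lon: lon + dLon };
}

Going the other direction, from GCJ-02 back to WGS-84, is usually done iteratively, since the offset is not analytically invertible.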

While algorithmic offsets introduced into digital maps might sound like nothing more than a matter of speculative concern—something more like a dinner conversation for fans of William Gibson novels—it is actually a very concrete issue for digital product designers. Releasing an app, for example, whose location functions do not work in China has immediate and painfully evident user-experience, not to mention financial, implications.

Shanghai China Map
Google Maps
One such app designer posted on the website Stack Overflow to ask about Apple’s “embeddable map viewer.” To make a long story short, when used in China, Apple’s maps are subject to “a varying offset [of] 100-600m which makes annotations display incorrectly on the map.” In other words, everything there—roads, nightclubs, clothing stores—appears to be 100-600 meters away from its actual, terrestrial position. The effect of this is that, if you check the GPS coordinates of your friends, as blogger Jon Pasden writes, “you’ll likely see they’re standing in a river or some place 500 meters away even if they’re standing right next to you.”

The same thread on Stack Overflow goes on to explain that Google also has its own algorithmically derived offset, known as “_applyChinaLocationShift” (or more humorously as “eviltransform”). The key, of course, to offering an accurate app is to account for this Chinese location shift before it ever happens—to distort the distortions before they occur.

In addition to all this, Chinese geographic regulations demand that GPS functions must either be disabled on handheld devices or they must be made to display a similar offset. If a given device—such as a smartphone or camera—detects that it is in China, then its ability to geo-tag photos is either temporarily unavailable or strangely compromised. Once again, you would find that your hotel is not quite where your camera wants it to be, or that the restaurant you and your friends want to visit is not, in fact, where your smartphone thinks it has guided you. Your physical footsteps and your digital tracks no longer align.

It is worth pointing out that this raises interesting geopolitical questions. If a traveler finds herself in, say, Tibet or on a short trip to the artificial islands of the South China Sea—or perhaps simply in Taiwan—are she and her devices really “in China”? This seemingly abstract question might already be answered, without the traveler even knowing that it’s been asked, by circuits inside her phone or camera. Depending on the insistence of China’s territorial claims and the willingness of certain manufacturers to acknowledge those assertions, a device might no longer offer accurate GPS readings.

Put another way, you might not think you’ve crossed an international border—but your devices have. This is just one, relatively small example of how complex geopolitical questions can be embedded in the functionality of our handheld devices: cameras and smartphones are suddenly thrust to the front line of much larger conversations about national sovereignty.

These sorts of examples might sound like inconsequential travelers’ trivia, but for China, at least, cartographers are seen as a security threat: China’s Ministry of Land and Resources recently warned that “the number of foreigners conducting surveys in China is on the rise,” and, indeed, the government is increasingly cracking down on those who flout the mapping laws. Three British geology students discovered this the hard way while “collecting data” on a 2009 field trip through the desert state of Xinjiang, a politically sensitive area in northwest China. The students’ data sets were considered “illegal map-making activities,” and they were fined nearly $3,000.

What remains so oddly compelling here is the uncanny gulf between the world and its representations. In a well-known literary parable called “On Exactitude in Science,” from Collected Fictions, Argentine fabulist Jorge Luis Borges describes a kingdom whose cartographic ambitions ultimately get the best of it. The imperial mapmakers, Borges writes, devised “a Map of the Empire whose size was that of the Empire, and which coincided point for point with it.” This 1:1 map, however, while no doubt artistically and conceptually wondrous, was seen as utterly useless by future generations. Rather than enlighten or educate, this sprawling and inescapable super-map merely smothered the very territory whose connections it sought to clarify.

Mars Coordinates, eviltransform, _applyChinaLocationShift, the “China GPS Offset Problem”—whatever name you want to describe this contemporary digital phenomenon of full-scale digital maps sliding precariously away from their referents, the gap between map and territory is suitably Borgesian.

Indeed, Borges ends his tiniest of parables with an image of animals and beggars living wild amidst the “tattered ruins” of an abandoned map, unaware of what its original purpose might have been—perhaps foreshadowing the possibility that travelers several decades from now will wander amidst remote Chinese landscapes with outdated GPS devices in hand, marveling at their apparent discovery of some parallel, dislocated version of the world that had been hiding in plain view.

Geoff wishes to thank Twitter user @0xdeadbabe for first pointing out “Mars Coordinates” to him. Follow Geoff on Twitter at @bldgblog.



Webpack + React 开发之路


杂七杂八的想法

记得大二的时候刚学习 Java,我做的第一个图形化用户界面是一个仿QQ的登录窗口,其实就是一些输入框和按钮,但是记得当时觉得超级有成就感,于是后来开始喜欢上写 Java,还做了很多小游戏像飞机大战、坦克大战啥的,自己还觉得特别有意思。
后来开始学前端,其实想想也是做图形化用户界面,不过是换了一个运行环境而已。但是写着写着发现很不顺手,和用 Java 写感觉很不一样,到底哪不对呢。
用 Java 写界面的时候,按钮是按钮,输入框是输入框,我做登录窗口的时候,只要定义一个登录窗口类,然后设置布局、把按钮、输入框加进去,一个登录窗口就出来了。
反观前端的实现,要写一个登录窗口,得先在 html 里定义结构,在 css 里制定样式,然后在 js 里添加行为,最头疼的是 js 里不仅仅只是这个登录窗口的行为,还有页面初始化的代码、别的按钮的监听等等等等一大堆乱七八糟的代码(作为菜鸟的自我吐槽)
其实我理解的以上问题的关键词就是 组件化 ,之所以以前写的那么别扭,很大程度上是自己带着组件化的思想,但是写不出组件化的代码。

直到现在使用上 React,真是感觉眼前一亮。当然还有很多很多需要学习的地方,就从现在开始,配合着 Webpack,踏上 React 的开发之路吧。

制作一个微博发送表单

下面通过 React 编写一个简单的例子,就是常用的微博发送的表单。

一、新建项目

项目目录如下:

/js
-- /components
---- /Publisher
------ Publish.css
------ Publish.jsx
-- app.js
/css
-- base.css
index.html
webpack.config.js
  • js/components 目录存放所有的组件,比如 Publisher 是我们的表单组件,里面存放这个表单的子组件(如果有的话)、组件的 jsx 文件以及组件自己的样式。
  • js/app.js 是入口文件
  • css 存放全局样式
  • index.html 主页
  • webpack.config.js webpack 的配置文件

二、配置 Webpack

编辑 webpack.config.js

var webpack = require('webpack');

module.exports = {
    entry: './js/app.js',
    output: {
        path: __dirname,
        filename: 'bundle.js'
    },
    module: {
        loaders: [
            {
                test: /\.jsx?$/,
                loader: 'babel',
                query: {
                    presets: ['react', 'es2015']
                }
            },
            {
                test: /\.css$/,
                loader: 'style!css'
            }
        ]
    },
    plugins: [
        new webpack.optimize.UglifyJsPlugin({
            compress: {
                warnings: false
            }
        })
    ]
}

上一篇文章 里是使用 webpack 进行 ES6 开发,其实不管是 ES6 也好,React 也好,webpack 起到的是一个打包器的作用,配置项和这里大致相似,就不再赘述。

不同的是在 babel-loader 里增加了 react 的转码规则。

另外这里使用到了 webpack 的一个内置插件 UglifyJsPlugin,通过它可以对生成的文件进行压缩。详细的介绍请看这里

三、安装一系列东东

首先保证安装了 nodejs 。

1) 初始化项目

npm init

2) 安装 webpack

npm install webpack -g

3) 安装 React

npm install react react-dom --save-dev

4) 安装加载器

本项目使用到的有 babel-loader、css-loader、style-loader。

  • babel-loader 进行转码
  • css-loader 对 css 文件进行打包
  • style-loader 将样式添加进 DOM 中

详细请看这里

npm install babel-loader css-loader style-loader --save-dev

5) 安装转码规则

npm install babel-preset-es2015 babel-preset-react --save-dev

四、码代码

index.html 中,引用的 js 文件是通过 webpack 生成的 bundle.js,css 文件是写在 /css 目录下的 base.css。

index.html

<!DOCTYPE html>
<html lang="en">
<head>
    <meta charset="UTF-8">
    <title>Document</title>
    <link rel="stylesheet" href="css/base.css">
</head>
<body>
    <div id="container"></div>
    <script src="bundle.js"></script>
</body>
</html>

/css/base.css

base.css 里面存放的是全局样式,也就是与组件无关的。

html, body, textarea {
    padding: 0;
    margin: 0;
}

body {
    font: 12px/1.3 'Arial','Microsoft YaHei';
    background: #73a2b0;
}

textarea {
    resize: none;
}

a {
    color: #368da7;
    text-decoration: none;
}

/js/app.js

/js/app.js 是入口文件,引入了 Publisher 组件

import React from 'react';
import ReactDOM from 'react-dom';
import Publisher from './components/Publisher/Publisher.jsx';

ReactDOM.render(
    <Publisher />,
    document.getElementById('container')
);

/js/components/Publisher/Publisher.jsx

好的,下面开始编写组件,首先,确定这个组件的组成部分,因为是一个简单的表单,所以不需要继续划分子组件

表单分为上中下三部分,title 里面包含热门微博和剩余字数的提示,textElDiv 包含输入框,btnWrap 包含发布按钮。

import React from 'react';

class Publisher extends React.Component {
    constructor(...args) {
        super(...args);
    }

    render() {
        return (
            <div className="publisher">
                <div className="title">
                    <div><a href="#">热门微博</a><span className="tips">还可以输入140</span></div>
                </div>
                <div className="textElDiv"><textarea></textarea></div>
                <div className="btnWrap"><a href="javascript:;" className="publishBtn">发布</a></div>
            </div>
        );
    }
}

export default Publisher;

我们暂时通过 className 给组件定义了样式名,但还没有实际写样式代码,因为要保证组件的封装性,所以我们不希望组件的样式编写到全局中去以免影响其他组件,最好像我们的目录划分一样,组件自己的样式跟着组件自己走,而且这个样式不影响其他组件。这里就需要用到 css-loader了。

css-loader 可以将 css 文件进行打包,而且可以对 css 文件里的 局部 className 进行哈希编码。这意味着可以这样写样式文件:

/* xxx.css */

:local(.className) { background: red; }
:local .className { color: green; }
:local(.className .subClass) { color: green; }
:local .className .subClass :global(.global-class-name) { color: blue; }

经过处理之后,则变成:

._23_aKvs-b8bW2Vg3fwHozO { background: red; }
._23_aKvs-b8bW2Vg3fwHozO { color: green; }
._23_aKvs-b8bW2Vg3fwHozO ._13LGdX8RMStbBE9w-t0gZ1 { color: green; }
._23_aKvs-b8bW2Vg3fwHozO ._13LGdX8RMStbBE9w-t0gZ1 .global-class-name { color: blue; }

也就是我们可以在不同的组件样式中定义 .btn 的样式名,但是经过打包之后,在全局里面就被转成了不同的哈希编码,由此解决了 css 全局命名冲突的问题。

关于 css-loader 更详细的使用,请参考这里

那么 Publisher 的样式如下:

/js/components/Publisher/Publisher.css

:local .publisher{
    width: 600px;
    margin: 10px auto;
    background: #ffffff;
    box-shadow: 0 0 2px rgba(0,0,0,0.15);
    border-radius: 2px;
    padding: 15px 10px 10px;
    height: 140px;
    position: relative;
    font-size: 12px;
}

:local .title{
    position: relative;
}

:local .title div {
    position: absolute;
    right: 0;
    top: 2px;
}

:local .tips {
    color: #919191;
    display: none;
}

:local .textElDiv {
    border: 1px #cccccc solid;
    height: 68px;
    margin: 25px 0 0;
    padding: 5px;
    box-shadow: 0px 0px 3px 0px rgba(0,0,0,0.15) inset;
}

:local .textElDiv textarea {
    border: none;
    border: 0px;
    font-size: 14px;
    word-wrap: break-word;
    line-height: 18px;
    overflow-y: auto;
    overflow-x: hidden;
    outline: none;
    background: transparent;
    width: 100%;
    height: 68px;
}

:local .btnWrap {
    float: right;
    padding: 5px 0 0;
}

:local .publishBtn {
    display: inline-block;
    height: 28px;
    line-height: 29px;
    width: 60px;
    font-size: 14px;
    background: #ff8140;
    border: 1px solid #f77c3d;
    border-radius: 2px;
    color: #fff;
    box-shadow: 0px 1px 2px rgba(0,0,0,0.25);
    padding: 0 10px 0 10px;
    text-align: center;
    outline: none;
}

:local .publishBtn.disabled {
    background: #ffc09f;
    color: #fff;
    border: 1px solid #fbbd9e;
    box-shadow: none;
    cursor: default;
}

然后就可以在 Publisher.jsx 中这样使用了

import React from 'react';
import style from './Publisher.css';

class Publisher extends React.Component {
    constructor(...args) {
        super(...args);
    }

    render() {
        return (
            <div className={style.publisher}>
                <div className={style.title}>
                    <div><a href="#">热门微博</a><span className={style.tips}>还可以输入140</span></div>
                </div>
                <div className={style.textElDiv}><textarea></textarea></div>
                <div className={style.btnWrap}><a href="javascript:;" className={style.publishBtn}>发布</a></div>
            </div>
        );
    }
}

export default Publisher;

这样组件的样式已经添加进去了,接下来就纯粹是进行 React 开发了。

编写 Publisher.jsx

表单的需求如下:

  1. 输入框获取焦点时,输入框边框变为橙色,右上角显示剩余字数的提示;输入框失去焦点时,输入框边框变为灰色,右上角显示热门微博。
  2. 输入字数小于且等于140字时,提示显示剩余可输入字数;输入字数大于140时,提示显示已经超过字数。
  3. 输入字数大于0且不大于140字时,按钮为亮橙色且可点击,否则为浅橙色且不可点击。

首先,给 textarea 添加 onFocus、onBlur、onChange 事件,通过 handleFocus、handleBlur、handleChange 来处理输入框获取焦点、失去焦点和输入。

然后将输入的内容保存在 state 里,这样每当内容发生变化时,就能方便的对变化进行处理。

对于按钮的变化、热门微博和提示之间的转换,根据 state 中内容的变化来切换样式就能轻松地做到。

完整代码如下:

import React from 'react';
import style from './Publisher.css';

class Publisher extends React.Component {
    constructor(...args) {
        super(...args);
        // 定义 state
        this.state = {
            content: ''
        }
    }

    /**
    * 获取焦点
    **/
    handleFocus() {
        // 改变边框颜色
        this.refs.textElDiv.style.borderColor = '#fa7d3c';
        // 切换右上角内容
        this.refs.hot.style.display = 'none';
        this.refs.tips.style.display = 'block';
    }

    /**
    * 失去焦点
    **/
    handleBlur() {
        // 改变边框颜色
        this.refs.textElDiv.style.borderColor = '#cccccc';
        // 切换右上角内容
        this.refs.hot.style.display = 'block';
        this.refs.tips.style.display = 'none';
    }

    /**
    * 输入框内容发生变化
    **/
    handleChange(e) {
        // 改变状态值
        this.setState({
            content: e.target.value
        });
    }

    render() {
        return (
            <div className={style.publisher}>
                <div className={style.title}>
                    <div>
                        <a href="#" ref="hot">热门微博</a>
                        <span className={style.tips} ref="tips">
                            {this.state.content.length > 140 ? '已超出' : '还可以输入'}{this.state.content.length > 140 ? this.state.content.length - 140 : 140 - this.state.content.length}
                        </span>
                    </div>
                </div>
                <div className={style.textElDiv} ref="textElDiv">
                    <textarea onFocus={this.handleFocus.bind(this)} onBlur={this.handleBlur.bind(this)} onChange={this.handleChange.bind(this)}></textarea>
                </div>
                <div className={style.btnWrap}>
                    <a href="javascript:;" className={this.state.content.length > 0 && this.state.content.length <= 140 ? style.publishBtn : `${style.publishBtn} ${style.disabled}`}>发布</a>
                </div>
            </div>
        );
    }
}

export default Publisher;

五、运行

  • 通过 --display-error-detail 可以显示 webpack 出现错误的中间过程,方便在出错时进行查看。
  • --progress --colors 可以显示进度
  • --watch 可以监视文件的变化并在变化后重新加载

运行命令如下:

webpack --display-error-detail --progress --colors --watch

React.js Conf: The Good Parts


I had the amazing opportunity to attend React.js conf in San Francisco on 22nd/23rd of February thanks to a generous diversity scholarship from Facebook! Two full days of talks from the creators, contributors and users of React.js, and 600+ React enthusiasts from around the world there to take it all in. It was a great chance to meet other React-ers and share ideas in a place where it is completely acceptable to take your laptop out at breakfast or the bar and talk shamelessly about code.

From what I heard, tickets this year were incredibly hard to get and if you didn’t have time to watch the live stream, here’s my summary of ‘React.js Conf: The Good Parts’ and a plethora of links for cool resources I heard about from the talks and talking to other people.

Nick Schrock’s Keynote really set the tone for the conference, highlighting how React has grown from a JavaScript library into its own ecosystem that can fundamentally advance web development and React Native is completely changing the way mobile apps are being built, with cross-stack engineers replacing platform specific roles and teams. He also pointed out some pretty impressive figures — like the fact that the Facebook Ads manager and Groups iOS and Android apps share 85–90% of their React Native code and were built with a single team! Many of the features on the Facebook app are also already in React Native, one being the Facebook Friend’s day video which you might have seen a couple of weeks ago!

Ben Alpert’s talk on ‘Making great React Apps’ touched on several ideas of what still needed to be improved in React and React Native including animations, gestures, rendering fast lists and tools for improving developer experience across React and React Native — like what if you could remove the need for setting up webpack/babel to quickly prototype a new project with just one file e.g. create app.js and just call ‘react run platform=ios’?

The announcement of Draft.js, a rich text editing library for React from Facebook, got everyone pretty excited! Making input text bold, cut, copy, paste, adding custom transformations like the ‘mentions/check-ins’ that can be added for statuses on Facebook — Draft.js makes this all infinitely easier for your React Apps. Isaac Salier-Hellendang’s talk explained how the library takes the good parts of the ‘contentEditable’ browser feature (native cursor and selection behaviour, native input events & key events, all rich text features, automatic autogrowing of elements, accessibility and that it works in all browsers) and applies the principles of React to turn it into a controlled component — like a Text Input field with an onChange event handler and the input value saved to a state. I definitely recommend watching the full talk if you’re interested in the implementation details.
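For anyone who has not tried it yet, the controlled-component idea looks roughly like this minimal sketch built on Draft.js's basic Editor and EditorState API (the component name is mine):

// Minimal sketch of a Draft.js controlled component: EditorState lives in React state
// and onChange feeds every edit back into it.
import React from 'react';
import { Editor, EditorState } from 'draft-js';

class MyEditor extends React.Component {
  constructor(props) {
    super(props);
    this.state = { editorState: EditorState.createEmpty() };
  }
  render() {
    return (
      <Editor
        editorState={this.state.editorState}
        onChange={editorState => this.setState({ editorState })}
      />
    );
  }
}

export default MyEditor;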

Data handling in React applications was a recurring theme throughout the two days with multiple talks highlighting the many different options out there. Lin Clark’s talk illustrated with her quirky code-cartoons made for a fun and extremely clear introduction to Flux, Redux and Relay.

In short, Flux is out, Relay is a bit too complicated to start with and Redux wins.

But seriously, Redux cuts some of the complexity of flux by using functional composition instead of callback registration, has a single store with immutable state, is super declarative, is great for testing and makes hot reloading and time travel debugging possible. Relay on the other hand requires a GraphQL server which is a beast in itself and takes a lot more set up, but has the additional benefits of being able to handle caching, query optimisation and network errors and the readability that comes from co-location of queries and views. Relay also allows deferred queries (e.g. retrieval of the title and text of an article and comments later) and reduction in the size of queries — relay retrieves data and puts it in a local cache so some query data can just be retrieved from the cache. Jared Forsyth's talk introduced Re-frame and Om/next, both ClojureScript libraries. Re-frame is like Redux but uses subscriptions to define how to get data from state, which can be memoized so subscriptions are reused between components and for those of you familiar with Redux, there's no need to do mapStateToProps(state){} in the container. Om/next is a Relay like library but without the need for a GraphQL server.
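To make the "functional composition, single immutable store" point concrete, here is a tiny Redux sketch (my own illustrative example, not code from the talk):

// Illustrative Redux sketch: a pure reducer and a single store holding all app state.
import { createStore } from 'redux';

function counter(state = { count: 0 }, action) {
  switch (action.type) {
    case 'INCREMENT':
      return { count: state.count + 1 }; // return a new object, never mutate the old state
    default:
      return state;
  }
}

const store = createStore(counter);
store.subscribe(() => console.log(store.getState()));
store.dispatch({ type: 'INCREMENT' }); // logs { count: 1 }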

Optimisation and performance improvement were also mentioned repeatedly. Aditya Punjani's talk about optimising the FlipKart mobile website for 2G connections in India introduced two really interesting ideas: the App Shell architecture instead of traditional server-side rendering (breaking down the app into loading state, with placeholders for data and loaded state, with the loading state being displayed in the first paint of the page) and service workers, a very cool browser API which can be used to intercept all network requests so you can choose to either retrieve data from cache (for offline use) or from the network.

Bhuwan Khattar suggested some methods for speeding up start up time. Branches in code — e.g. when a/b testing can lead to slow start up times as all the modules for each branch need to be downloaded leading to lots of unnecessary overhead. This is difficult to optimise because the branch chosen might be dependent on runtime data. One of the solutions he suggested was inline requires for lazy execution — i.e. only require things when they are necessary rather than requiring all the modules at the top of the file. Bhuwan also suggested using a helper function 'matchRoute' which does pattern matching based on the route name and only conditionally downloads and executes the code for each route. The example code below is from this great article by Jan Pojer about routing (specifically with Relay and react-router).
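Jan Pojer's original snippet is not reproduced here, but the idea looks roughly like the sketch below (module paths and names are purely illustrative): the require() calls sit inside the matching branch, so a route's code is only executed, and only downloaded in a setup like Facebook's, when that route is actually hit.

// Illustrative only: per-route lazy requires via a matchRoute-style helper.
function matchRoute(route, handlers) {
  const names = Object.keys(handlers);
  for (let i = 0; i < names.length; i++) {
    if (route === names[i]) {
      return handlers[names[i]](); // the require only happens for the matched route
    }
  }
  return null;
}

function componentForRoute(route) {
  return matchRoute(route, {
    home: () => require('./screens/Home'),       // hypothetical module path
    profile: () => require('./screens/Profile'), // hypothetical module path
  });
}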

Another solution used by Facebook is to use a wrapper instead of ‘require’ — use the wrapper to require dependencies that are not needed on initial render — these are downloaded as necessary with a loading indicator being shown in the meantime.

In Tadeu Zagallo’s talk on ‘Optimising React Native’ he showed how to profile apps using the simulator in Xcode — bring up the developer menu inside the simulator (Cmd-Z) and click ‘start profiling’ and then view in chrome. This brings up a handy menu in Chrome so you can see which functions take the longest to run. He also mentioned two things to remember: always add component keys for lists — React’s DOM diffing algorithm uses the keys to check which items need re-rendering so add keys to make sure all the list items are not re-rendered and try to always use the ‘shouldComponentUpdate’ lifecycle method to prevent re-render unless necessary.
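Both tips are easy to picture with a small sketch (component names are illustrative):

// Illustrative sketch: stable keys on list items plus a shouldComponentUpdate guard.
import React from 'react';
import { View, Text } from 'react-native';

class Row extends React.Component {
  shouldComponentUpdate(nextProps) {
    // Only re-render this row when its own item actually changed.
    return nextProps.item !== this.props.item;
  }
  render() {
    return <Text>{this.props.item.title}</Text>;
  }
}

const List = ({ items }) => (
  <View>
    {items.map(item => (
      // A stable key lets React's diffing skip rows that did not change.
      <Row key={item.id} item={item} />
    ))}
  </View>
);

export default List;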

One of the most exciting things for me was hearing about the new Navigation API in React-Native from Eric Vicenti. After struggling with the Navigator component for months, the new declarative version sounds like a much better solution, borrowing heavily from Redux to remove the state from within the component and using actions for transitioning between views. The new changes should also support deep linking with URIs, a feature that's highly desirable in mobile apps. I still haven't had a look at the new version properly but it's now available as 'NavigationExperimental' in the latest release of React Native.

Leland Richardson's talk about testing React Native was also super exciting! Not only has he created a library 'Enzyme' to help with traversing the DOM tree when shallow rendering React, he's only just gone and created a complete mock of the entire React Native API! Enzyme has shallow, mount and render methods as well as methods to find nodes of a specific type in the tree. And he's also created some handy examples of how to use both libraries (links at the end)!

Jamison Dance’s talk has really made me want to try the Elm language! I’d only ever heard of Elm so it was completely new to me but in short, Elm is a functional programming language that transpiles to JavaScript and runs in the browser. It has a static type system (so no run time errors!) and there’s no ‘null’!. Elm only has stateless functions and only immutable data. Jamison showed how Elm applications have a similar tree architecture to React apps with parent components passing data down to child components which then respond to user interactions by sending data back to their parents which then update the top level app state and cause the App to re-render. Updating of state in React can be done in a number of ways including different libraries discussed earlier like Redux or Flux, but in Elm there’s a built in system for updating state using Observables. The tradeoff is between constraints and guidelines — if you trust the language designers to have made good decisions for you, it eliminates the need to try and decide between different libraries for your application and you can focus on the problems specific to your application domain. Jamison suggested that learning Elm could help you become a better React Developer and I think I’m going to give it a go!

There were also some cool tips/ideas from the lightning talks:
* A way of making React Native code more reusable between iOS and Android — create a wrapper for components that uses Platform.OS to check the operating system (a rough sketch follows after this list)
* Nuclide IDE support for react-native to make the developer experience just like the web. There’s now a react-native-debugger, react-native-inspector and the ability to add breakpoints!
* React Native for web! A way to build truly platform agnostic code, enable universal rendering and it comes with built in web accessibility!
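For the Platform.OS wrapper mentioned in the first tip above, a rough sketch (file names are illustrative) could look like this:

// Illustrative sketch: callers import one Toolbar and never branch on the platform themselves.
import { Platform } from 'react-native';
import ToolbarAndroid from './ToolbarAndroid'; // hypothetical platform-specific implementations
import ToolbarIOS from './ToolbarIOS';

const Toolbar = Platform.OS === 'android' ? ToolbarAndroid : ToolbarIOS;

export default Toolbar;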

And then there were some of the wackier talks on Virtual Reality, Open-GL effects and making an arduino-raspberry-pi-React powered version of Jeopardy!!

Overall the conference was an amazing experience and my list of ‘new things to learn’ has now grown completely out of hand!

Links

Libraries/APIs

For React:
* Draft.js — Rich Text Editing with React
* Email templating using React — Oy-Vey
* Gatsbyjs static site generator using React + Markdown
* Open GL for React!
* gl-react
* gl-react-dom
* gl-react-inspector
* GL Sandbox
* Falcor — Data fetching library by Netflix
* Cycle.js — data flow architecture based on observables
* Enzyme — JavaScript Testing utility for React that mimicks jQuery’s API for DOM traversal
* Guide for using Enzyme with Webpack

For React Native
* NavigationExperimental — new declarative Navigator API
* Cordova plugins for React Native
* Open GL for React Native — gl-react-native
* React Native Web
* A complete mock of React Native
* Guide for testing React Native with Enzyme
* Example React Native tests with Enzyme

Random…
* Service Workers to support offline experiences, push notifications and loads more
* Push Notifications using Google Cloud Messaging
* Chrome api for speech recognition
* Webpack plugin to install modules from ‘import’ statements (thanks Eric Clemmons!)
* API Archive of Jeopardy Questions!!
* Track.js — Monitor and report JS errors in web applications
* Raygun.io — Crash reporting

Developer Tools

* React Native plugin for Visual Studio Code
* Deco IDE for React Native
* Nuclide with React Native including react-native-debugger, ability to add breakpoints and react native inspector, flow support
* HockeyApp — distribute beta versions of apps without using the app store

Explanations/Tutorials

https://github.com/nikhilaravi/reactconf2016


Running Mocha + Istanbul + Babel


http://stackoverflow.com/questions/33621079/running-mocha-istanbul-babel

Using Babel 6.x, let’s say we have file test/pad.spec.js:

import pad from '../src/assets/js/helpers/pad';
import assert from 'assert';

describe('pad', () => {
  it('should pad a string', () => {
    assert.equal(pad('foo', 4), '0foo');
  });
});

Install a bunch of crap:

$ npm install babel-istanbul babel-cli babel-preset-es2015 mocha

Create a .babelrc:

{
  "presets": ["es2015"]
}

Run the tests:

$ node_modules/.bin/babel-node node_modules/.bin/babel-istanbul cover \
node_modules/.bin/_mocha -- test/pad.spec.js


  pad
     should pad a string


  1 passing (8ms)

=============================================================================
Writing coverage object [/Volumes/alien/projects/forked/react-flux-puzzle/coverage/coverage.json]
Writing coverage reports at [/Volumes/alien/projects/forked/react-flux-puzzle/coverage]
=============================================================================

=============================== Coverage summary ===============================
Statements   : 100% ( 4/4 )
Branches     : 66.67% ( 4/6 ), 1 ignored
Functions    : 100% ( 1/1 )
Lines        : 100% ( 3/3 )
================================================================================

UPDATE: I’ve had success using nyc (which consumes istanbul) instead of istanbul/babel-istanbul. This is somewhat less complicated. To try it:

Install stuff (you can remove babel-istanbul and babel-cli):

$ npm install babel-core babel-preset-es2015 mocha nyc

Create .babelrc as above.

Execute this:

$ node_modules/.bin/nyc --require babel-core/register node_modules/.bin/mocha \
test/pad.spec.js

…which should give you similar results. By default, it puts coverage info into .nyc-output/, and prints a nice text summary in the console.

Note: You can remove node_modules/.bin/ from any of these commands when placing the command in package.json‘s scripts field.


Developing React Native Android Apps with Linux


Posted by | October 19, 2015 | Apps, Blog, Programming | 4 Comments

React Native is Facebook’s open source framework for building native applications on iOS and Android. It achieves this by providing a common developer experience so that a developer learns one set of tools and can apply it to both platforms. It takes the component approach used by React and applies it to the mobile app world.

As part of our research and development work here at Black Pepper, we’ve been investigating how we can make use of React Native to write Android and iOS apps for our customers. Because we predominantly use Linux as our development environment, I ran into a few issues along the way and this guide will hopefully help others avoid the same pitfalls.

Why do we need this React Native guide?

Facebook have provided a great deal of information about React Native on github, however because it’s early days for React Native, and Facebook use OS X for development, getting up and running on Linux isn’t as straightforward as it should be. Additionally, the Android version of React Native hasn’t been publicly available for very long, so there are fewer resources available.

Approach

I’m a big fan of using Docker to isolate tools, particularly when testing out something new. It also means you can get a new development up and running in no time; simply pull down the docker images you need. So to get up and running with React Native, I followed the Getting Started guide but added everything to a Dockerfile as I went.

The Docker image is created in such a way that it will have access to a directory on the host machine for code storage, allowing you to use whichever editor or IDE you prefer, along with any other day to day tools such as git. Additionally I made use of privileged mode so that the docker container will be able to access the USB ports of the host, so the React Native app can be run on a physical device, rather than just in the emulator.

Prerequisites

You’ll need to have installed Docker and familiarised yourself with how it works if you want to try this out.
You’ll also need to have familiarised yourself with Android app development, React and React Native.

Installation

Currently the Dockerfile only exists in my personal github repo but I'm in the process of moving it to the Black Pepper repo and from there I'll publish the image so you'll be able to retrieve it with a simple "docker pull".

However, for the time being you can obtain and build the image as follows:

git clone https://github.com/gilesp/docker.git
docker build -t react-native docker/react_native

There are a couple of shell script files in the repo, the most interesting ones are “react-native.sh” and “react_bash.sh” (The inconsistency in naming is one of the things I need to sort out before pushing everything to the Black Pepper repos). Add these shell scripts to your path, as follows:

ln -s path/to/dockerrepo/react_native/react-native.sh bin/react-native
ln -s path/to/dockerrepo/react_native/react_bash.sh bin/react-bash

Usage

Note: The shell scripts assume that your current working directory is where you’ll be storing the react native project and so they map them as a volume in the docker container.

Starting a new project

react-native init AwesomeProject

This will create a directory called AwesomeProject in your current working directory and populate it with the React Native app infrastructure.

Running your project

Plug in an android device to your machine via USB.

Now, start a shell in the Docker container

cd AwesomeProject
react-bash

Now we need to create a reverse tcp connection with adb, allowing the app on the phone to communicate with the nodejs server running in the container:

adb reverse tcp:8081 tcp:8081

And finally, start the server and deploy the app to your device:

react-native start > react-start.log 2>&1 &
react-native run-android

Note: The run-android command should start the server automatically but I found it to be unreliable, hence why I manually start it first.

If all goes well, then the application will launch on your device. Shake the phone to open the developer menu and turn on Reload JS. Now, whenever you make changes to the react native source code (such as editing index.android.js), the changes will instantly appear on the device.

Summary

Working with React Native under Linux turned out to be relatively painless, although there are still some issues I’d like to resolve with the docker image. The main one is that I have, so far, been unable to get the emulator to work correctly, due to issues with 64-bit support. I can run an emulator on the host machine and have react native use that (in the same way as it would use a real device via USB), but it’d be good to remove that dependency and have it entirely containerised.

Other than that though, it’s entirely feasible to develop Android apps using React Native under Linux.



fir.im Weekly –不能错过的 GitHub Top 100 开源库


好的工具&资源,会带来更多的灵感。本期 fir.im Weekly 精选了一些实用的 iOS,Android 的使用工具和源码分享,还有前端、UI方面的干货。一起来看下:)

Swift 开源项目精选

@SwiftLanguage分享。

“基于《Swift 语言指南》开源项目收录,做了一个甄别、筛选,并辅以一句话介绍。来源 GitHub。”Github 的 Swift 库已尽收眼底,简洁明了,还在不断更新中。正在学习 Swift 的同学不要错过–>>Swift 开源项目精选。

xcbuild – Facebook 出品的开源 App 构建工具

xcbuild 是 Facebook 出品的开源 App 构建工具,能够为 App 构建过程与多平台运行提供更快构建、更好文档并兼容Xcode。Github 地址–> https://github.com/facebook/xcbuild .

Swift 烧脑体操

@唐巧_boy 出了一系列的【Swift 烧脑体操】的文章,文如题目,涨姿势必备,文章列表如下:

Swift 烧脑体操(一) – Optional 的嵌套

Swift 烧脑体操(二) – 函数的参数

Swift 烧脑体操(三) – 高阶函数

Swift 烧脑体操(四) – map 和 flatMap

GitHub Top 100的Android&iOS开源库

作者@G军仔整理了一份旨在帮助 Android 初学者快速入门以及找到适合自己学习的资料, GitHub 地址:Android_Data ,@李锦发 之前也整理了iOS版, GitHub 地址:trip-to-iOS.

Injection for Xcode:成吨的提高开发效率

@没故事的卓同学强烈推荐一个Xcode高端必备插件:Injection Plugin for Xcode.不用重新启动应用就可以让修改的代码生效。更多好玩的功能,点击这里

盘点分析 Android N 的新特性

Android N 预览版来啦!支持 Java8 了,支持多窗口了,支持更多新特性了! @代码家连夜写了一篇从开发者角度解析 Android N 的文章,感兴趣点击这里.

Android界面性能调优手册

界面是 Android 应用中直接影响用户体验最关键的部分。如果代码实现得不好,界面容易发生卡顿且导致应用占用大量内存。@Vince蔡培培 整理了自己的经验和分享,详情请点击这里

Android APK终极瘦身21招

@移动开发前线分享。

作者@冯建V前不久写过一篇《APK瘦身实践》,在公司的要求下,将6.5M的Apk硬生生的减到不到4M(已开启minifyEnabled等常规压缩手段),后面他根据反馈又整理出这篇Apk瘦身指南,对Android开发者更具指导意义。

文章传送门.

ZFPlayer视频播放器 源码

@任子丰写的视频播放器——ZFPlayer,基于AVPlayer,支持横屏、竖屏(全屏播放还可锁定屏幕方向),上下滑动调节音量、屏幕亮度,左右滑动调节播放进度等等,ZFPlayer荣登当日github排行榜。Github 地址:https://github.com/renzifeng/ZFPlayer

WaveLoadingView – 圆形波浪进度指示器类

开发者@潜艇_刘智艺Zzz 将 WaveLoadingView 圆形波浪进度指示器开源在Github 上,配置参数丰富点击这里查看。

JSPatch – APP 动态更新服务平台

@bang 分享的JSPatch 平台,现在开放注册。可以实时修复 iOS App 线上 bug,一键让你的 APP 拥有动态运营能力。地址见:http://jspatch.com/ .

BugHD for JavaScript – 轻松收集前端 Error

从收集 APP 崩溃信息到全面收集网站出现的 Error, BugHD 变得更加强大。前端 er 们不用再面对 一堆 Bug 愁容满面,可以来这里看看。

Admire.so – 一个设计资源导航网站

Admire.so 钦慕网,是一个设计资源导航网站,还有一些前端er 会用到的资源。每天会添加一个新的链接,为你的创意、你的设计多一些灵感。

_
这期的 fir.im Weekly 就到这里,欢迎大家分享更多的资源。


Orange Pi One Board Quick Start Guide with Armbian Debian based Linux Distribution


Orange Pi One board is the most cost-effective development board available on the market today, so I decided to purchase one sample on Aliexpress to try out the firmware, which has not always been perfect simply because Shenzhen Xunlong focuses on hardware design and manufacturing, and spends little time on software development to keep costs low, so the latter mostly relies on the community. armbian has become a popular operating system for ARM Linux boards in recent months, so I've decided to write a getting started guide for Orange Pi One using a Debian Desktop image released by the armbian community.

Orange Pi One Unboxing

But let’s start by checking out what I received. The Orange Pi One board is kept in an anti-static bag, and comes with a Regulatory Compliance and Safety Information sheet, but no guide, as instead the company simply asks users to visit http://www.orangepi.org to access information to use their boards.


The top of the board has the most interesting bits with Ethernet, micro USB and USB ports, HDMI port, micro SD slot, power jack, a power button, the 40-pin “Raspberry Pi” compatible header, Allwinner H3 processor and one Samsung RAM chip. The 3-pin serial console header can be found right next to the RJ45 jack (under it in the pic).


The bottom of the board features another Samsung RAM chip (512MB in total), and the camera interface.


I’ve also taken a picture to compare Orange Pi One dimensions to the ones of Orange Pi 2 mini, Raspberry Pi 2, and Raspberry Pi Zero.


By the way, while the official prices for Raspberry Pi ($5), Orange Pi One ($9.99), and C.H.I.P ($9) are a little different, I ended up paying about the same for all three boards once shipping is included: £9.04 (or about $12.77) for Raspberry Pi Zero, $13.38 for Orange Pi One, and $14.22 for C.H.I.P (Cyber Monday deal for “$8”). C.H.I.P computer is not shown in the picture above simply because I have not received it yet. The performance of Orange Pi One will be much greater than the other thanks to its quad core processor as discussed on Raspberry Pi Zero, C.H.I.P and Orange Pi One comparison.

Installing and Setting Up Armbian on Orange Pi One

While the company claims you can download firmware on the Orange Pi Download page, they have not published a firmware image specifically for Orange Pi One, and while you could probably use an Orange Pi PC image (this may mess up the regulator), I've never heard anyone ever praise Shenzhen Xunlong for the quality of the images they've released, quite the contrary. While Orange Pi community member Loboris released several images for Allwinner H3 boards, he does not seem to have updated them for Orange Pi One, and I've heard a lot about the armbian distribution recently, which is based on Debian and targets ARM Linux boards, so that's the image I'm going to try.

You can currently download Debian Jessie server or desktop based on Linux 3.4 legacy kernel, but once the Ethernet driver gets into Linux mainline (aka Vanilla), you’ll be able to run the latest Linux mainline on Orange Pi One, at least for headless operation.

First you'll need to get yourself an 8GB or greater micro SD card, preferably with good performance (Class 10 or better), and use a Windows, Mac OS or Linux computer to download and flash the firmware image.

I’ve done so in Ubuntu 14.04. Once you insert the micro SD card into the computer, you may want to located the SD card with lsblk:

I used a 32GB class 10 micro SD card, and in my case the device is /dev/sdb. I’m going to use the command line, but you can use ImageWriter for Ubuntu or Windows, as well as some other tools for Mac OS. Let’s download the firmware, extract it, and flash it to the micro SD card (replace /dev/sdX by your own device):

Now insert the micro SD card into Orange Pi One, and connect all necessary cables and accessories. I connected HDMI and Ethernet cables, an RF dongle for an air mouse, a USB OTG adapter for a USB flash drive, the serial debug board, and the power supply. Please note that the micro USB port cannot be used to power the board, so you'll either need to purchase the power adapter, or an inexpensive USB to 4.0/1.7mm power jack adapter to use with a 5V/2A USB power adapter.

Orange_Pi_One_Power_Supply_Connections

As you connect the power supply, the red LED should light up, and after a few seconds, you should see the kernel log on the HDMI TV or monitor. I also accessed the serial console via a UART debug board, but it will only show the very beginning, and once the framebuffer is set up most messages are redirected to the monitor. This is what I got for the first boot in the serial console:

But I got many error messages on the TV reading “[cpu_freq] ERR: set cpu frequency top 1296MHz failed!”. Those are actually normal because a single firmware image is used for all Orange Pi Allwinner H3 boards, and they use different regulators. The messages will disappear once the system has detected an Orange Pi One.

Orange_Pi_One_cpu_freq_Error_Message

You may have to be patient during the first few minutes of the very first boot (2 to 3 minutes) as you see the error messages above looping seemingly forever, as the system is resizing the root file system partition, creating a 128MB emergency swap area, creating the SSH key, and updating some packages. Once this is all done, the system will reboot, and you'll be asked to change the root password, create a new user, and adjust the resolution with the h3disp utility, which will automatically patch the script.bin file in the FAT32 boot partition of your micro SD card. The default credentials are root with password 1234.

Welcome screen and new user creation after changing root password

H3Disp options

The h3disp utility allows you to choose the resolution and refresh rate of your system. I selected 1080p50, rebooted the board one last time, and after about 20 seconds, I could get to the Debian XFCE desktop.


The resolution of the desktop is indeed 1920×1080, Ethernet is working, but my keyboard layout does not match as the default layout is for Slovenian language. I went to Settings->Keyboard to change that.

Orange_Pi_One_layout

And it seemed to work randomly as I sometimes got a QWERTY keyboard, but other times it would revert to a QWERTZ keyboard, and I’m not sure why. Following the instructions on armbian documentation using:

did not completely solve my issue either at first, but it seems to be fine now…

I’ve also noticed some permissions issues starting with the network which requires sudo for ping and iperf, and likely to CONFIG_ANDROID_PARANOID setting in the kernel configuration. My USB flash drive was also not automatically mounted, and I had to use the sudo to mount the drive manually too.

Most people will also likely need to change the timezone with:

Orange_Pi_One_Terminal

Let’s check some parameters with the command line:

The system is running sunxi Linux 3.4.110 kernel, and Debian 8. The processor max frequency is set to 1.2 GHz as it should be, the GPIOs appear to be supported just like in Orange Pi 2 mini (but less I/Os are shown), total RAM is 494MB, and 2.1GB is used out of the 29GB root partition in the micro SD card. I know some ARM boards can’t be powered off properly, but it’s not the case with Orange Pi One as I could turn it off cleanly with the power LED turning off at the end of the shutdown process.

That’s all for this guide, and I’ll showcase 3D graphics and video hardware decoding in a separate post. You can get further by checking out Armbian Orange Pi One page, following the instructions to build your own Armbian image, and browsing Orange Pi One thread in armbian forums.

Read more: http://www.cnx-software.com/2016/03/16/orange-pi-one-board-quick-start-guide-with-armbian-debian-based-linux-distribution/#ixzz435i7LyeN


一步一步实现iOS微信自动抢红包(非越狱)


微信红包

前言:最近笔者在研究iOS逆向工程,顺便拿微信来练手,在非越狱手机上实现了微信自动抢红包的功能。

题外话:此教程是一篇严肃的学术探讨类文章,仅仅用于学习研究,也请读者不要用于商业或其他非法途径上,笔者一概不负责哟~~

好了,接下来可以进入正题了!

此教程所需要的工具/文件


是的,想要实现在非越狱iPhone上达到自动抢红包的目的,工具用的可能是有点多(工欲善其事必先利其器^_^)。不过,没关系,大家可以按照教程的步骤一步一步来执行,不清楚的步骤可以重复实验,毕竟天上不会掉馅饼嘛。

解密微信可执行文件(Mach-O)


因为从Appstore下载安装的应用都是加密过的,所以我们需要用一些工具来为下载的App解密,俗称砸壳。这样才能便于后面分析App的代码结构。

首先我们需要一台已经越狱的iPhone手机(现在市面上越狱已经很成熟,具体越狱方法这里就不介绍了)。然后进入Cydia,安装OpenSSHCycriptiFile(调试程序时可以方便地查看日志文件)这三款软件。

PS:笔者的手机是iPhone 6Plus,系统版本为iOS9.1。

在电脑上用iTunes上下载一个最新的微信,笔者当时下载的微信版本为6.3.13。下载完后,iTunes上会显示出已下载的app。

iTunes

连上iPhone,用iTunes装上刚刚下载的微信应用。

打开Mac的终端,用ssh进入连上的iPhone(确保iPhone和Mac在同一个网段,笔者iPhone的IP地址为192.168.8.54)。OpenSSH的root密码默认为alpine

ssh

接下来就是需要找到微信的Bundle id了,这里笔者有一个小技巧,我们可以把iPhone上的所有App都关掉,唯独保留微信,然后输入命令 ps -e

微信bundle id

这样我们就找到了微信的可执行文件Wechat的具体路径了。接下来我们需要用Cycript找出微信的Documents的路径,输入命令cycript -p WeChat

cycript
  • 编译dumpdecrypted
    先记下刚刚我们获取到的两个路径(Bundle和Documents),这时候我们就要开始用dumpdecrypted来为微信二进制文件(WeChat)砸壳了。
    确保我们从Github上下载了最新的dumpdecrypted源码,进入dumpdecrypted源码的目录,编译dumpdecrypted.dylib,命令如下:
dumpdecrypted.dylib

这样我们可以看到dumpdecrypted目录下生成了一个dumpdecrypted.dylib的文件。

  • scp
    拷贝dumpdecrypted.dylib到iPhone上,这里我们用到scp命令.
    scp 源文件路径 目标文件路径 。具体如下:
scp
  • 开始砸壳
    dumpdecrypted.dylib的具体用法是:DYLD_INSERT_LIBRARIES=/PathFrom/dumpdecrypted.dylib /PathTo
dumpdecrypted

这样就代表砸壳成功了,当前目录下会生成砸壳后的文件,即WeChat.decrypted。同样用scp命令把WeChat.decrypted文件拷贝到电脑上,接下来我们要正式的dump微信的可执行文件了。

dump微信可执行文件


  • 从Github上下载最新的class-dump源代码,然后用Xcode编译即可生成class-dump(这里比较简单,笔者就不详细说明了)。
  • 导出微信的头文件
    使用class-dump命令,把刚刚砸壳后的WeChat.decrypted,导出其中的头文件。./class-dump -s -S -H ./WeChat.decrypted -o ./header6.3-arm64
导出的头文件

这里我们可以新建一个Xcode项目,把刚刚导出的头文件加到新建的项目中,这样便于查找微信的相关代码。

微信的头文件

找到CMessageMgr.hWCRedEnvelopesLogicMgr.h这两文件,其中我们注意到有这两个方法:- (void)AsyncOnAddMsg:(id)arg1 MsgWrap:(id)arg2;- (void)OpenRedEnvelopesRequest:(id)arg1;。没错,接下来我们就是要利用这两个方法来实现微信自动抢红包功能。其实现原理是,通过hook微信的新消息函数,我们判断是否为红包消息,如果是,我们就调用微信的打开红包方法。这样就能达到自动抢红包的目的了。哈哈,是不是很简单,我们一起来看看具体是怎么实现的吧。

  • 新建一个dylib工程,因为Xcode默认不支持生成dylib,所以我们需要下载iOSOpenDev,安装完成后(Xcode7环境会提示安装iOSOpenDev失败,请参考iOSOpenDev安装问题),重新打开Xcode,在新建项目的选项中即可看到iOSOpenDev选项了。
iOSOpenDev
  • dylib代码
    选择Cocoa Touch Library,这样我们就新建了一个dylib工程了,我们命名为autoGetRedEnv。

    删除autoGetRedEnv.h文件,修改autoGetRedEnv.m为autoGetRedEnv.mm,然后在项目中加入CaptainHook.h

    因为微信不会主动来加载我们的hook代码,所以我们需要把hook逻辑写到构造函数中。

    __attribute__((constructor)) static void entry()
    {
      //具体hook方法
    }

    hook微信的AsyncOnAddMsg: MsgWrap:方法,实现方法如下:

    //声明CMessageMgr类
    CHDeclareClass(CMessageMgr);
    CHMethod(2, void, CMessageMgr, AsyncOnAddMsg, id, arg1, MsgWrap, id, arg2)
    {
      //调用原来的AsyncOnAddMsg:MsgWrap:方法
      CHSuper(2, CMessageMgr, AsyncOnAddMsg, arg1, MsgWrap, arg2);
      //具体抢红包逻辑
      //...
      //调用原生的打开红包的方法
      //注意这里必须为给objc_msgSend的第三个参数声明为NSMutableDictionary,不然调用objc_msgSend时,不会触发打开红包的方法
      ((void (*)(id, SEL, NSMutableDictionary*))objc_msgSend)(logicMgr, @selector(OpenRedEnvelopesRequest:), params);
    }
    __attribute__((constructor)) static void entry()
    {
      //加载CMessageMgr类
      CHLoadLateClass(CMessageMgr);
      //hook AsyncOnAddMsg:MsgWrap:方法
      CHClassHook(2, CMessageMgr, AsyncOnAddMsg, MsgWrap);
    }

    项目的全部代码,笔者已放入Github中。

    完成好具体实现逻辑后,就可以顺利生成dylib了。

重新打包微信App


  • 为微信可执行文件注入dylib
    要想微信应用运行后,能执行我们的代码,首先需要微信加入我们的dylib,这里我们用到一个dylib注入神器:yololib,从网上下载源代码,编译后得到yololib。

    使用yololib简单的执行下面一句就可以成功完成注入。注入之前我们先把之前保存的WeChat.decrypted重命名为WeChat,即已砸完壳的可执行文件。
    ./yololib 目标可执行文件 需注入的dylib
    注入成功后即可见到如下信息:

    dylib注入
  • 新建Entitlements.plist
    <?xml version="1.0" encoding="UTF-8"?>
    <!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
    <plist version="1.0">
    <dict>
      <key>application-identifier</key>
      <string>123456.com.autogetredenv.demo</string>
      <key>com.apple.developer.team-identifier</key>
      <string>123456</string>
      <key>get-task-allow</key>
      <true/>
      <key>keychain-access-groups</key>
      <array>
          <string>123456.com.autogetredenv.demo</string>
      </array>
    </dict>
    </plist>

    这里大家也许不清楚自己的证书Teamid及其他信息,没关系,笔者这里有一个小窍门,大家可以找到之前用开发者证书或企业证书打包过的App(例如叫Demo),然后在终端中输入以下命令即可找到相关信息,命令如下:
    ./ldid -e ./Demo.app/demo

  • 给微信重新签名
    接下来把我们生成的dylib(libautoGetRedEnv.dylib)、刚刚注入dylib的WeChat、以及embedded.mobileprovision文件(可以在之前打包过的App中找到)拷贝到WeChat.app中。

    命令格式:codesign -f -s 证书名字 目标文件

    PS:证书名字可以在钥匙串中找到

    分别用codesign命令来为微信中的相关文件签名,具体实现如下:

    重新签名
  • 打包成ipa
    给微信重新签名后,我们就可以用xcrun来生成ipa了,具体实现如下:
    xcrun -sdk iphoneos PackageApplication -v WeChat.app -o ~/WeChat.ipa

安装拥有抢红包功能的微信


以上步骤如果都成功实现的话,那么真的就是万事俱备,只欠东风了~~~

我们可以使用iTools工具,来为iPhone(此iPhone Device id需加入证书中)安装改良过的微信了。

iTools

大功告成!!


好了,我们可以看看hook过的微信抢红包效果了~

自动抢红包

哈哈,是不是觉得很爽啊,”妈妈再也不用担心我抢红包了。”。大家如果有兴趣可以继续hook微信的其他函数,这样既加强了学习,又满足了自己的特(zhuang)殊(bi)需求嘛。

教程中所涉及到的工具及源代码笔者都上传到Github上。
Github地址

特别鸣谢:
1.iOS冰与火之歌(作者:蒸米)
2.iOS应用逆向工程


3·15晚会报道的无人机是怎么被劫持的?


2016-03-25

0×00 背景

在2015年GeekPwn的开场项目中,笔者利用一系列漏洞成功演示劫持了一架正在飞行的大疆精灵3代无人机,夺取了这台无人机的控制权,完成了可能是全球首次对大疆无人机的劫持和完整控制。GeekPwn结束后,组委会立即将漏洞通知给官方,而大疆也很快完成了漏洞的修复。今年的3月15号,大疆发布了全新一代的精灵4代无人机,精灵3代从此退居二线;同时央视315晚会也对去年GeekPwn的这个劫持项目进行了详细的报道。

考虑到这些漏洞的修复已经过了足够长的时间,我们决定公开漏洞的完整细节和利用流程,希望能为国内的方兴未艾的射频安全研究圈子贡献自己的一份力量。

本文争取以零基础的角度对整个发现和利用过程抽丝剥茧,并尽量详细阐述这个过程中涉及的技术细节。本文涉及的技术细节适用大疆精灵3代,2代和1代,不适用最新的精灵4代无人机。由于行文时间仓促,如有疏漏敬请斧正。

0×01 攻击场景讨论:风险真实存在但可控

可能是因为近两年无人机的曝光率颇高,去年GeekPwn上完成无人机劫持项目后感兴趣的电视台和媒体并不少,也引发了普通群众的讨论和担心。

虽然我们已经证明并演示了精灵系列无人机是可以被劫持和完整控制的,但想要在实际环境中的直接将公园、景区、街道上空飞行的无人机据为己有,信号增益和劫持后的稳定控制仍然是需要深入研究的问题。或许在官方遥控器上加载自己的万能遥控器ROM,然后直接借用官方遥控器的信号增益和控制系统,会是一个可行的方案。

此外,造成劫持的漏洞已经得到合理的修复,新版ROM发布也已经超过4个月。随着安全研究者的攻防研究以及官方的重视,实际能攻击的精灵无人机也会越来越少。

所以,我们的结论是,普通群众不用过于担忧无人机的安全问题,反而应该更关注越来越多的走入普通人家的智能设备的安全问题。顺便提一下,这块我们团队亦有关注(比如同样是参加了GeekPwn 2015和央视315晚会的烤箱和POS机),后续还会有更多的研究成果放出。

好了,现在开始我们的无人机劫持之旅。

0×02 抽丝剥茧:精灵系列遥控原理全解析

0×0200 射频硬件初探

要黑掉无人机,第一步要做的是信息收集。我们先来了解一下精灵3代所使用的射频硬件。

图1 拆开的精灵3代遥控器(左图)和无人机主机(右图)

左翻右翻,经过了一系列艰难的电焊拆解和吹风机刮除保护膜后,终于找到了负责射频通信的芯片和负责逻辑的主控芯片,并识别出了它们的型号。看得出来大疆对电路板刻意做了一些防拆解和信息保护。

从下面的图中能识别出来,主控芯片选择的是知名大厂NXP的LPC1765系列,120MHz主频,支持USB 2.0,和射频芯片使用SPI接口进行通讯。而射频芯片则是国产BEKEN的BK5811系列,工作频率为5.725GHz – 5.852GHz或者5.135GHz – 5.262GHz,共有125个频点,1MHz间隔,支持FSK/GFSK调制解调,使用ShockBurst封包格式,并且支持跳频,使用经典的SPI接口进行控制。

图2 主控芯片

图3 射频芯片

而这个参数强大的国产射频芯片激起了我们的兴趣,经过一些挖掘,发现这个芯片原来山寨自NORDIC的nRF24L01+,没错,就是这个号称性价比之王的nRF24L01+ 2.4GHz射频芯片的5.8GHz版本,更有意思的是这两个不同厂家芯片的datasheet中绝大部分内容都是通用的。

通过这些基本的硬件信息确定了射频的频段后,我们马上拿出HackRF在gqrx中观察5.8GHz的信号。看着瀑布图(下图4)中跳来跳去的小黄线,我们意识到精灵3的射频通讯应该是跳频的,而在不知道跳频序列的情况下,无法对射频信号进行完整解调。此时HackRF的射频分析基本上派不上用处,唯有通过逻辑分析仪来看看射频芯片是如何跳频的。

图4 使用gqrx观察射频信号

0×0201 不得已的控制逻辑追踪

从上一节获得的硬件信息中,我们已经知道主控芯片和射频芯片之间是采用SPI接口进行通讯和控制的,因此只要从BK5811的引脚中找到SPI需要的那四个引脚,连上逻辑分析仪,对这四个引脚的电位变化进行采样分析,我们就能看到主控芯片是如何控制射频芯片的跳频了。

0×020100 SPI接口定义 

SPI协议本身其实挺简单的,在CS信号为低电位时,SCK通过脉冲控制通讯的时钟频率,每个时钟周期里,SI为输入,SO为输出,通过SI和SO在每个时钟里高低电位的切换构成一个bit,每八个时钟周期构成一个字节,从而形成一个连续的字节流,一个字节流代表一个命令,由射频芯片的datasheet约定好。SPI协议通讯示意图如下所示,其中四个引脚分别为:

    SO(MISO):主设备数据输入,从设备数据输出。
SI(MOSI):主设备数据输出,从设备数据输入。
SCK(CLK):时钟信号,由主设备产生。
CS(CSN):从设备使能信号,由主设备控制。

图5 SPI协议通讯示意图

0×020101 连接逻辑分析仪 

通过BK5811的datasheet,我们定位到了SPI通信的那几个引脚(如图6),通过万用表确认引脚连通性,然后在可以电焊的地方通过飞线连上逻辑分析仪的测试钩,折腾了很久总算连上了(如图7)。

 

图6 BK5811中SPI引脚定义

图7 通过电焊和飞线将BK5811的SPI引脚连上逻辑分析仪

随后,从逻辑分析仪中,我们得到了作为安全人员来说最喜欢的二进制数据流。

0×020102 射频芯片控制命令解析 

在BK5811的datasheet中,明确定义了它所支持的每一条SPI命令。通过连续的电位变化传过来一个完整的SPI命令如下所示:

图8 逻辑分析仪中的一个SPI命令

其中0×30是命令号,高3位代表操作是写BK5811的寄存器,而寄存器id由这个字节中的低5位决定,是0×10,而0×10代表写的内容是ShockBurst的发送地址(类似以太网的mac地址)。而后面五字节(0×11 0×22 0×33 0×44 0×19)则是发送地址本身。

0×020103 跳频逻辑总结 

通过一段时间的观察,我们发现SPI命令颇为简单,为了方便观察大量命令的序列,我们按照datasheet中的定义写了一个解析脚本,在脚本的帮助下终于整理清楚了跳频的流程。

图9 SPI命令解析脚本

在大疆的定义下,完整的跳频序列有16个频点,这些频点在遥控器和无人机主机配对(一般发生在出厂前)时通过随机产生,一旦确定后就存储固定起来,除非手动重新配对。

遥控器打开后,会以7ms的周期,按照跳频序列指定的顺序来变化射频发射的频率,16次(112ms)一个循环,而在每一个周期内,发射一次遥控的控制数据。一个典型的SPI命令序列如:<跳频> 1ms <发包> 6ms

图10 遥控器SPI命令数字逻辑示意图

对于无人机主机,则是以1ms的周期来变化接收信号的频率,一旦收到来自遥控器的射频信号(BK5811会使用上文所说的发送和接收地址来识别通过),则转而进入7ms的周期,和遥控器保持同步。一旦信号丢失,马上又恢复1ms的跳频周期。一个典型的SPI命令序列如:<跳频> <查包> 1ms <查包> 1ms <查包> 1ms <查包> 1ms <查包> 1ms <查包> 1ms <查包>。

图11 无人机主机SPI命令数字逻辑示意图

从上面的分析我们能注意到,遥控器只负责发送数据,无人机主机只负责接收数据,两者之间并无射频上的交互。这为我们后面覆盖遥控器的信号打好了基础。

0×0202 模拟信号到数字信号的鸿沟

在搞清楚遥控的工作流程后,我们知道是可以对其进行完全的模拟(先假设射频序列已知),创造出一个以假乱真的遥控来。但在加工二进制命令前,如何完成二进制命令中数字化的数据和真实世界中连续的电磁波之间的转换困扰了我们很久,笔者甚至很长一段时间都在想重回大学修满通信专业的科目。

0×020200 电磁波和GFSK制式的基本原理 

先补一点从学通信的同事那里偷师回来的基本常识。

电磁波在我们的世界中连续的传播,通过特定的方式可以使其携带二进制信息,这个方式称为调制解调。发送数据时,一般是将的调制好的基带信号(含二进制信息)和载波信号叠加后进行发送,通常基带信号的频率会比载波信号频率低很多,如BK5811的载波信号频率在5.8GHz左右,但基带信号的频率仅为2MHz。而接收方通过解调和滤波,将基带信号从接收到的载波信号中分离出来,随后进行采样和A/D转换得到二进制数据。

FSK(Frequency-shift keying)是一种经典的基于频率的调制解调方式,其传递数据的方式也很简单。例如约定500KHz代表0,而1000KHz代表1,并且以1ms作为采样周期,如果某1ms内基带信号的频率是500KHz,这表明这是一个0,而如果下1ms内基带信号的频率为1000KHz,那表明下一位二进制比特是1。简单来说,FSK制式就是通过这样连续的电磁波来连续的传递二进制数据。

图12 FSK调制解调示意图
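沿用上文 500KHz 代表 0、1000KHz 代表 1、每 1ms 采样一次的约定,比特判决本质上就是看每个采样窗口测得的基带频率离哪个约定频率更近。下面是一个极简的 JavaScript 示意(仅为演示思路,并非实际使用的解调脚本):

// 极简示意:对每个 1ms 窗口测得的基带频率(KHz)做最近邻判决
function fskDemod(windowFreqsKHz) {
  return windowFreqsKHz.map(function (f) {
    return Math.abs(f - 1000) < Math.abs(f - 500) ? 1 : 0;
  });
}

// 例:连续 4 个窗口的频率为 500、1000、990、510 KHz,判决结果为 [0, 1, 1, 0]
console.log(fskDemod([500, 1000, 990, 510]));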

而GFSK制式仅仅是在FSK制式的基础上,在调制之前通过一个高斯低通滤波器来限制信号的频谱宽度,以此来提升信号的传播性能。

0×020201 GFSK解调和IQ解调 

在理解了GFSK制式的原理后,接下来我们尝试在HackRF的上写出GFSK解调脚本,从一段遥控实际发出的电磁波中提取二进制数据(如下图13)。需要注意的是HackRF收发的射频数据另外采用了IQ调制解调,代码上也需要简单处理一下。

图13 在空中传播的GFSK电磁波(IQ制式)

由于没有找到现成的解调代码,只好在MATLAB上(如下图14)摸爬滚打了许久,并恶补了许多通信基础知识,折腾出(如下图15)GFSK解调脚本,并成功模拟遥控器的跳频逻辑,能够像无人机那样获取每一次跳频的数据。至此, 我们再次得到了作为安全人员来说最喜欢的二进制数据流。

图14 MATLAB中模拟GFSK解调

图15 GFSK解调脚本工作图

0×020202 遥控控制数据总结 

经过分析,一条典型的遥控控制数据如下(图16)所示(最新版本固件和稍旧版本的固件协议,格式略有不同):

图16 两种类型的遥控控制数据

最开始的5个字节为发送方的ShockBurst地址,用于给无人机验证是不是配对的遥控器。

接下来的26字节为遥控数据本身(上下,左右,油门,刹车等遥控器上的一切操作),我们详细来讲解下。

遥控器上的控制杆的一个方向(如上+下,左+右)由12bit来表示。如表示左右方向及力度的数值power_lr由上数据的第5个字节和第6个字节的低4位决定,控制杆居中时power_lr为0×400(1024),控制杆拉至最左时power_lr为0x16C(364),而拉至最右时power_lr为0×694(1684)。也就是说,遥控器可以将控制杆左和右,力度可分为660级,并在控制数据中占用12bit传输给无人机主机,主机针对不同的力度执行不同的飞行行为。

图17 遥控控制数据解析代码片段
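原文此处的解析代码以截图形式给出,这里补一段示意性的 JavaScript 片段帮助理解(并非原文代码;假设 payload 为那 26 字节控制数据,且第5字节为 power_lr 的低 8 位、第6字节的低 4 位为高 4 位,具体字节序与位序需以实际抓包为准):

// 示意:从 26 字节控制数据中取出 12bit 的 power_lr(字节序/位序为演示假设)
function parsePowerLR(payload) {
  var low8 = payload[4];           // 第5个字节
  var high4 = payload[5] & 0x0f;   // 第6个字节的低4位
  return (high4 << 8) | low8;      // 0x16C(最左) ~ 0x400(居中) ~ 0x694(最右)
}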

其他遥控控制杆的数据也非常类似,故不再赘述。值得注意的是,所有26字节的遥控控制数据是一次性的发给无人机的,故上下,左右,前进后退,油门刹车等所有行为都是并行无干扰的。这也是无人机遥控性能指标中经常说的支持6路信号,12路信号的含义。

控制数据中最后的1个字节位CRC8校验位(旧版是CRC16),是前面的31字节的CRC8/CRC16校验结果,校验错误的数据将被抛弃。

0×0203 遥控器和无人机通讯逻辑总结

通过以上漫长的分析过程,我们总算完全搞懂了在遥控器上拨动控制杆的行为,是如何一步步反馈到无人机的飞控程序来完成对应的飞行行为。简单整理下:

a) 遥控器和无人机开机后,遥控器负责发送数据,无人机负责接收数据。它们通过共同的跳频序列的高速跳频来保持一个数据链路,链路故障有一定能力能迅速恢复。

b) 无人机每7ms就会收到一次遥控器发出的32字节控制数据,控制数据只有一条命令一种格式,所有控制杆和开关的状态会一次性发送到无人机。无人机收到数据后会进行地址校验和CRC校验,确保数据是正确无误的。

c) 用户在操纵遥控器的过程中,操控的行为和力度都会在7ms内通过那32字节控制数据反馈至无人机,接着由无人机的飞控程序来完成对应的飞行行为。

0×03 各个击破:完全控制无人机

从遥控器的通讯逻辑来看,想要通过HackRF这类SDR设备覆盖遥控器发出的射频数据来劫持无人机。必须解决以下几个问题:

a) 虽然通过HackRF来收发GFSK数据已经没有问题,但不知道跳频序列根本无法和无人机保持同步。

b) 如何打断遥控器原本和无人机之间的稳定射频链路,并同时建立和无人机之间新的稳定链路。

c) 大疆遥控器的射频功率做了大量优化,有效控制距离达一公里,HackRF的射频频率难以企及。

下面我们来看看如何逐个击破这几个问题。

0×0300 伪造遥控器:信道的信息泄漏漏洞

在通过脚本对遥控器信号进行GFSK解调时,我们发现了BK5811芯片一个奇怪的现象:芯片在某个频道发送数据时,会同时向临近的特定频道发送同样内容数据内容。举个例子来说,同在+7ms这一时刻,除了会向13号频道发送属于这个频道的数据外,还会向其他一些特定的频道发送原本属于13号频道的数据。

    + 7ms: Channel 13,
+ 7ms: Channel 09,
+ 7ms: Channel 21,

这个奇怪的现象虽然不会影响射频的功能,只是多了一些冗余数据,但却成了我们得到遥控器跳频序列的突破点,实实在在的构成了一个信息泄露漏洞。

我们可以通过脚本,从5725MHz到5850MHz进行遍历,每次隔1MHz,刚好覆盖BK5811的每一个频道。遍历监听时,考虑单个频点的情况,我们能得到冗余数据(假设监听61号频道)如下:

    + 0ms: Channel 61,
+ 7ms: Channel 13,
+ 21ms: Channel 09,
+ 112ms: Channel 61,

因为我们已经明确112ms是一次跳频序列的循环,那么从冗余数据中我们可以推论:

    ch61 + 1 Step(7ms) = ch13
ch13 + 3 Step(21ms) = ch09
ch09 + 12 Step(84ms) = ch61

换成文字结论即是:如果61号频道是跳频序列的第1个,那么13号频道是第2个,9号频道是第4个,一个一个频道的去遍历,就可以把这个序列补充完整。实际遍历时我们发现,HackRF脚本仅需要30到120秒,不需要遍历全部127个频道,即可推论和补齐完整的16个频点及跳频序列(如下图所示)。

图18 HackRF脚本遍历后得到完整的跳频序列
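把上面的推理写成代码大致如下(仅为演示思路的示意脚本,并非 GeekPwn 演示所用代码):监听某个起始频点,把监听到的每个冗余包的相对时间按 7ms 一跳折算成序列中的位置,逐个频点遍历即可把 16 个频点填满。

// 示意:根据冗余包的相对时间推导跳频序列
function buildHopSequence(startChannel, observations) {
  var seq = new Array(16).fill(null);
  seq[0] = startChannel;
  observations.forEach(function (obs) {
    var step = Math.round(obs.offsetMs / 7) % 16; // 112ms 为一个完整循环
    seq[step] = obs.channel;
  });
  return seq;
}

// 例:在 61 号频道监听到 +7ms 来自 13 号、+21ms 来自 9 号的冗余包,
// 可知 13 号是序列中的第 2 个、9 号是第 4 个
console.log(buildHopSequence(61, [
  { channel: 13, offsetMs: 7 },
  { channel: 9, offsetMs: 21 }
]));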

通过这个特殊的信息泄露漏洞,配合遥控器的调频规律可快速得到跳频序列,但我们也不清楚为什么BK5811芯片会存在这样的信息泄露漏洞。随后我们拿nRF24L01+也做了类似的测试,发现nRF24L01+也同样会产生同样的问题。

0×0301 劫持无人机:信号覆盖漏洞

下面来看看信号覆盖的问题如何解决。有个关键的前提是遥控器只发数据,无人机只收数据,它们之间没有交互。

在之前进行逻辑分析的时候我们发现,不管无人机是1ms跳频一次还是7ms跳频一次,它实际上只会接收跳频完毕后最早发给它的合法数据包。正常情况下可能是跳频完毕后的第5ms时,收到了遥控器发过来的数据,再下一次跳频后的5ms时,再收到遥控器发过来的下一次数据。

那如果我们能一直早于遥控器发出数据,无人机岂不是就直接使用我们的数据了?确实是这样的。假设我们的控制脚本中设置为6ms跳频,我们很快能夺取无人机的控制权(7次跳频内)。但遥控器也会夺回控制权,最终就会出现无人机有1/7的数据来自遥控,6/7的来自黑客的局面。

这其实是一场信号争夺战,那么有没有办法让无人机更稳定的更稳定接收我们的信号呢?如果我们把跳频时间设置为 6.9ms,跳频后每隔0.4ms(Arduino UNO R3的速度极限)发送一次遥控控制数据的话,虽然夺取无人机控制权需要更长的时间(约10s),但一旦获得控制权,在0.4ms发送一次数据的高刷新率覆盖之下,遥控器基本没可能夺回控制权。

图19 伪造遥控器的SPI命令数字逻辑

至此,劫持无人机的基本技术问题已经通过一个信息泄漏漏洞和一个信号覆盖漏洞解决了。

0×0302 稳定性 & 可用性优化

在实现控制脚本的过程中,HackRF存在的两个严重限制:一方面HackRF使用USB通讯接口决定了它的通讯延迟巨大(指令延迟约为30ms),上文中动辄0.4ms的控制精度HackRF做不到;另外一方面,HackRF在5.8GHz频段的信号衰减严重(信号强度仅为遥控器的1%,可能是通用天线在高频段增益偏低),估计只有在贴着无人机射频芯片的情况下才有作用。天线问题故无法使用HackRF劫持无人机。

灵机一动,我们想到了和遥控器类似的做法:通过Arduino UNO R3单片机平台来操作BK5811芯片,直接在Arduino上实现我们的控制逻辑。当然,再加一个某宝上淘的有源信号放大器,如下图所示。根据测试,有效控制范围为10米左右。

图20 无人机劫持模块全家福

最终,通过了漫长的分析和各种漏洞利用方法的尝试后,我们完成了对大疆无人机的劫持。通过HackRF遍历和监听,然后将序列输入到Arduino中,在Arduino中完成对无人机信号的劫持,最后来通过Arduino上连接的简易开关来控制无人机。控制效果可以参看这个央视315中的视频片段。

0×04 后记:攻是单点突破,防是系统工程 

从漏洞分析和利用的过程来看,大疆在设计无人机和射频协议时确实考虑了安全性的问题,其中跳频机制虽然很大程度上提升了协议的破解难度,但却被过度的依赖。笔者和团队长期从事腾讯产品的漏洞研究工作,深知如所有其他漏洞攻防场景一样,分散而孤立的防御机制跟本无法抵御黑客的突破或绕过,指望一个完美的系统来抵御黑客,如同指望马奇诺防线来抵御德国军队的入侵一样不现实。而更现实情况是攻和守的不对称,攻击者利用单点的突破,逐层的推进,往往会领先防御者一大截。

防御者就无计可施了吗?当然不是。聪明的防御者一定懂得两个系统性的思路:未知攻焉知防和借力。一方面防守者必须是优秀的攻击者,才有可能嗅得到真正攻击者的蛛丝马迹,才有可能在关键节点上部署符合实际情况;另外一方面防守者必须借助自己是在企业内部这一优势和业务并肩作战,利用业务的资源和数据这些攻击者拿不到的资源,配合对攻击的理解,建立对攻击者来说不对称的防御系统。

另外一个层面,智能硬件行业各个厂商对安全的重视令人堪忧。作为无人机行业绝对第一的大疆,尚且存在严重的安全问题,更不要说其他公司——笔者和TSRC物联网安全研究团队近两年业余时间对智能硬件安全的研究也印证了这个结论。二进制漏洞的复杂性和门槛决定了这种漏洞类型很少有机会出现在公众的视野中,但在更隐晦的地下,二进制漏洞攻击者的力量正在以防御者无法企及的速度悄然成长。也许等到阿西莫夫笔下《机械公敌》中的机器人社会形态形成时,我们要面对的不是人工智能的进化和变异,而是漏洞攻击者这种新时代的恐怖分子。

最后,感谢我有一把刷子、zhuliang、泉哥、lake2在整个破解过程中的支持。

0×05 相关链接

[1] http://v.qq.com/iframe/player.html?vid=m0019do4elt&width=670&height=502.5&auto=0
[2] http://2015.geekpwn.org/
[3] http://www.dji.com/cn/newsroom/news/dji-statement-15mar
[4] http://www.bekencorp.com/Botong.Asp?Parent_id=2&Class_id=8&Id=14
[5] https://github.com/mossmann/hackrf
[6] https://www.arduino.cc/en/Main/ArduinoBoardUno
[7] https://github.com/JiaoXianjun/BTLE
[8] http://blog.kismetwireless.net/2013/08/playing-with-hackrf-keyfobs.html

*本文来自腾讯安全应急响应中心(TSRC)投稿,作者Gmxp系腾讯安全平台部终端安全团队负责人。原文链接:security.tencent.com,转载须注明原文链接及出处。


愚人节技术不愚人~React Native开发技术周报Issue#05


愚人节技术不愚人~React Native开发技术周报Issue#05

说在前面的话:React Native开发技术周报,主要会涉及React Native最新资讯,技术开发文章,开源项目,工具,视频等等。今天是我们的第五期,同时各位朋友有优秀的有关React Native技术开发文章可以发给我。

React Native交流3群:496508742

(一).资讯

1.FaceBook发布适用于React Native开发的SDK包(目前只适配Android平台)

大家在使用React  Native开发的过程中可以使用FaceBook SDK库,轻松集成社会化分享,登录,应用分析等API

2.个人发布Mac桌面版本开发框架

React Native Desktop 可以让你用 React Native 技术构建 OS X 下的桌面应用程序。难道真心要准备通吃了?嘎嘎

3.React Native Horse Push热更新平台

来自深圳金马 root#68xg.com,强烈推荐的热更新开发平台。

(二).技术文章

1.饿了么在移动O2O应用React Native的技术实践

该React Native分享主要是基于之前做的饿了么商家招聘配送员和兼职平台的iOS应用的经验,目的是帮助对React Native感兴趣的同学了解React Native目前发展的情况,以及如果你想选择React Native进行移动端开发,这个过程中将会遇到哪些坑,迈过哪些坎儿。该文章介绍了饿了么在这方面实践中遇到的很多问题以及解决方案,还是非常不错的~

2.React生命周期和props & state

话说如果需要了解React Native中组件的运行生命周期以及相关属性,只需要看React的相关内容即可。该文章图文并茂地讲解了相关的生命周期内容以及props、state的内容

3.React Native 中组件的生命周期

看完上文还不过瘾?OK,再来一篇React Native组件生命周期总结的文章。

4.ECMAScript6十大特性

React Native从0.18版本开始,写法已经更新成了ES6规范了。所以该ES6相关特性还非常值得一看

5.抛开 React 学习 React 第一部分

看完本文你能学到什么? 当你第一部分和第二部分都学习完之后,你也许就会知道你为什么需要 React 以及 Redux 类似的 state container (状态管理器)。

6.React&React Native初次体验

7.来自饿了么React-Native蜂鸟客户端实践内容

上海蜂鸟团队,去返利网分享React  Native实践内容

8.看Facebook是如何优化React Native性能

该文来自FaceBook官方博客,React Native 允许我们运用 React 和 Relay 提供的声明式的编程模型,写JavaScript来构建我们的 iOS 和 Android 的应用。这样的做法使得我们的代码更精简,更容易理解和阅读,这些代码还可以在多个平台共享。我们也可以加快迭代速度(因为在开发时不用等待漫长的编译)。使用React Native,我们可以发布更快,打磨更多细节,让应用运行的更流畅。这其中优化性能是我们工作的一大重要部分,接下来讲述 Facebook 如何使应用性能足足提升两倍的故事~

9.探究 React Native 中 Props 驱动的 SVG 动画和 Value 驱动动画

React Native 作为一个复用前端思想的移动开发框架,并没有完整实现CSS,而是使用JavaScript来给应用添加样式。这是一个有争议的决定,可以参考这个幻灯片来了解 Facebook 做的理由。自然,在动画上,因为缺少大量的 CSS 属性,React Native 中的动画均为 JavaScript 动画,即通过 JavaScript 代码控制图像的各种参数值的变化,从而产生时间轴上的动画效果。

10.初窥基于 react-art 库的 React Native SVG

art是一个旨在多浏览器兼容的Node style CommonJS模块。在它的基础上,Facebook又开发了react-art,封装art,使之可以被react.js所使用,即实现了前端的svg库。然而,考虑到react.js的JSX语法已经支持将<circle>、<svg>等svg标签直接插入到dom中(当然此时使用的就不是react-art库了),此外还有HTML canvas的存在,因此,在前端上,react-art并非不可替代。

11.多React Native项目时依赖管理的最佳实践

本文很好的讲解了多依赖管理的最佳实践

在实际开发过程中,经常需要同时运行和修改多个React Native工程,比如运行github上的开源项目以观察某种控件的实际效果。那么此时,各项目下的初始化(npm install)就会非常的痛苦,因为React Native的文件非常大,以0.17.0为例,安装后达到309MB。尽管,我们可以通过阿里npm等镜像站的方式加速下载的过程,但是下载后的进一步编译也非常地耗时。

12.React Native基础之Linking Libraries链接库配置-适配iOS开发

iOS React Native开发中,静态库配置详解方法

13.React Native配置后,一直’Installing react-native package from npm…’,长时间无反应的解决方案

这个问题由于没有科学上网或者网络情况问题,经常出现。虽然一般来讲正常网络也需要几分钟或者十几分钟才能搞定。不过经过本文的方法讲解,很快哦~

14.React Native for Android 热部署图片自定义方案

热部署时,我们期望升级包中包含js代码与图片资源。bundle的热部署网上已经有两种方案了,一种是用反射,一种是利用RN自带函数,将bundle初始化时直接放到指定目录下,之后通过替换bundle文件实现代码热部署。我们希望图片也可以实现热部署,本文是一个比较简单的解决方案。

15.干货:React Native 代码分离打包最佳实践

其中困扰很久的一个问题就是代码的分离:rn提供的打包机制将业务代码和RN的lib代码打包到一个文件里,固然没错;仔细看了下,我的业务文件压缩之后的只有370K左右,lib包大小就520K,每次更新代码都白白下载520k?下载速度影响不说,费轱辘呀~~

本文详细介绍了分离打包的一种解决方案。

16.干货:QQ控件ReactNative For Android 项目实战总结

Android Qzone 6.1版本在情侣空间涉水React Native,以动态插件方式将情侣空间进行React Native的改造。在情侣空间基础上,Android Qzone 6.2版本以融合的方式将话题圈进行React Native改造。本文主要讲述话题圈的开发改造流程,相关数据对比及性能优化。

17.进行CodePush进行React Native热更新技术文章(英文原版)

如果大家准备采用CodePush做热更新的话,可以关注一下这篇文章

18.React Native Animated动画详解

最近ReactNative(以下简称RN)在前端的热度越来越高,不少同学开始在业务中尝试使用RN,这里着重介绍一下RN中动画的使用与实现原理。

19.知乎经典讨论帖:React Native有什么优势?能跟原生比么?

知乎上面,各大开发者,牛人对于React Native优势以及和原生开发的对比做了很热烈的讨论,相信看完会有一些体会的

20.react-native 组件间通信

该文章虽然已经写了很长时间了,不过也很好的介绍了React Native组件间通信的相关方法

21.React-Native痛点解析之开发环境搭建及扩展

本文为《React Native入门与实战》的作者之一,魅族高级研发经理魏晓军来为我们解析RN开发中的痛点。本文分享的是在环境搭建和扩展中会遇到的问题与解决方案。

22.ReactNative增量升级方案

当修改了代码或者图片的时候,只要app使用新的bundle文件和assets文件夹,就完成了一次在线升级。本文主要讲解增量升级的解决方案。

(三).开源项目

1.[译]React Native开源SQLite数据库组件(react-native-sqlite-storage)

该组件是对SQLite的移植封装,适用于React Native的Android、iOS平台

2.React Native开源项目-CNode论坛客户端

整体效果还可以的,不过暂时只是适配iOS平台

3.每天都实战一个React-Native项目

初学者从基础开始入门实战项目,把学习的组件知识点慢慢的串联在一起,还是非常有用的

4.React Native开源iOS图表组件(react-native-ios-charts)

该组件针对React Native进行封装,基于iOS Charts开源库重新封装,适配于iOS平台.该组件封装了一些常用的图表例如:Bar,Line,Scatter,Combined, Pie, Candle, Bubble等等

5.React Native开源图片选择器组件(react-native-android-imagepicker)

该组件是封装了系统图片选择功能的React Native图片选择器组件,当前只适配Android平台

6.ReactNative重写的OSChina的git客户端

亲测,体验效果很不错

(四).工具

1.全新的开发React Native的DECO IDE工具

看官方介绍好像很牛逼的样子,不过现在还没有开放下载,大家可以先关注着吧

2.淘宝 NPM 镜像

作为在墙内的童鞋们,进行安装npm的时候经常因为网络问题加载不成功,这边提供国内淘宝镜像,助大家一臂之力,速度非常的快哦~

3.Siphon构建工具

要开发React Native For iOS一定要使用Mac  OS X,一定要安装Xcode?No No,我来告诉你方法:使用Siphon工具,可以不需要安装Xcode IDE进行构建和发布React Native应用

尊重原创,未经授权不得转载:From 江清清的技术专栏(http://www.lcode.org) 侵权必究!

 


国内各地图API坐标系统比较与转换


国内各地图API坐标系统比较与转换

备注:资料均来源与网上,这里稍加整理,有错欢迎指出

一、各个坐标系的概况

        众所周知地球是一个不规则椭圆体,GIS中的坐标系定义由基准面和地图投影两组参数确定,而基准面的定义则由特定椭球体及其对应的转换参数确定。 基准面是利用特定椭球体对特定地区地球表面的逼近,因此每个国家或地区均有各自的基准面。基准面是在椭球体基础上建立的,椭球体可以对应多个基准面,而基准面只能对应一个椭球体。意思就是无论是谷歌地图、搜搜地图还是高德地图、百度地图区别只是针对不同的大地地理坐标系标准制作的经纬度,不存在准不准的问题,大家都是准的只是参照物或者说是标准不一样。谷歌地图采用的是WGS84地理坐标系(中国范围除外),谷歌中国地图和搜搜中国地图采用的是GCJ02地理坐标系,百度采用的是BD09坐标系,而设备一般包含GPS芯片或者北斗芯片获取的经纬度为WGS84地理坐标系,为什么不统一用WGS84地理坐标系这就是国家地理测绘总局对于出版地图的要求,出版地图必须符合GCJ02坐标系标准了,也就是国家规定不能直接使用WGS84地理坐标系。所以定位大家感觉不准确很多又叫出版地图为火星地图其实只是坐标系不一样而已。这就是为什么设备采集的经纬度在地图上显示的时候经常有很大的偏差,远远超出民用GPS 10米偏移量的技术规范。

以上参考自:haotsp.com

总结:

WGS84坐标系:即地球坐标系,国际上通用的坐标系。

GCJ02坐标系:即火星坐标系,WGS84坐标系经加密后的坐标系。

BD09坐标系:即百度坐标系,GCJ02坐标系经加密后的坐标系。

搜狗坐标系、图吧坐标系等,估计也是在GCJ02基础上加密而成的。

 

二、各个地图API采用的坐标系

API 坐标系
百度地图API 百度坐标
腾讯搜搜地图API 火星坐标
搜狐搜狗地图API 搜狗坐标*
阿里云地图API 火星坐标
图吧MapBar地图API 图吧坐标
高德MapABC地图API 火星坐标
灵图51ditu地图API 火星坐标

注1:百度地图使用百度坐标,支持从地球坐标和火星坐标导入成百度坐标,但无法导出。并且批量坐标转换一次只能转换20个(待验证)。

注2:搜狗地图支持直接显示地球坐标,支持地球坐标、火星坐标、百度坐标导入成搜狗坐标,同样,搜狗坐标也无法导出。

个人认为:采用自家坐标体系,而不采用国内通用的火星坐标体系,实在是自寻短处。当然,百度是因为做的足够大、足够好,所以很霸道,也为以后一统天下而不让别人瓜分之而做准备吧。搜狗虽然用自家坐标体系,但能将地球坐标直接导入,此举也属唯一。而图吧地图不知道学什么加密方式,以前用地球坐标用的好好的,现在用图吧自己的坐标,难道是因为给百度做过所以也来了这么一招?或者沿用百度?不得而知。

本文的目的在于:做地图开发的时候,不希望被一家地图API迁就,所以采用火星坐标是正确的选择,希望本文能够对选择使用谁家API的开发者提供一点帮助吧。就我个人而言,我绝不会使用非火星坐标系统的地图API,虽然百度地图API很好很强大确实很吸引我。

以上参考自:http://rovertang.com/labs/map-compare/

三、各个坐标系的相互转换

1.火星坐标系 (GCJ-02) 与百度坐标系 (BD-09) 的转换算法,其中 bd_encrypt 将 GCJ-02 坐标转换成 BD-09 坐标, bd_decrypt 反之。

#include <math.h>

// 原代码未给出 x_pi 的定义,这里按该算法的通用实现补上
const double x_pi = 3.14159265358979324 * 3000.0 / 180.0;

// GCJ-02 -> BD-09
void bd_encrypt(double gg_lat, double gg_lon, double &bd_lat, double &bd_lon)
{
    double x = gg_lon, y = gg_lat;
    double z = sqrt(x * x + y * y) + 0.00002 * sin(y * x_pi);
    double theta = atan2(y, x) + 0.000003 * cos(x * x_pi);
    bd_lon = z * cos(theta) + 0.0065;
    bd_lat = z * sin(theta) + 0.006;
}

// BD-09 -> GCJ-02
void bd_decrypt(double bd_lat, double bd_lon, double &gg_lat, double &gg_lon)
{
    double x = bd_lon - 0.0065, y = bd_lat - 0.006;
    double z = sqrt(x * x + y * y) - 0.00002 * sin(y * x_pi);
    double theta = atan2(y, x) - 0.000003 * cos(x * x_pi);
    gg_lon = z * cos(theta);
    gg_lat = z * sin(theta);
}

2.地球坐标系 (WGS-84) 到火星坐标系 (GCJ-02) 的转换(即 GPS 加偏)算法

using System;

namespace Navi
{
    class EvilTransform
    {
        const double pi = 3.14159265358979324;
        //
        // Krasovsky 1940
        //
        // a = 6378245.0, 1/f = 298.3
        // b = a * (1 - f)
        // ee = (a^2 - b^2) / a^2;
        const double a = 6378245.0;
        const double ee = 0.00669342162296594323;
        //
        // World Geodetic System ==> Mars Geodetic System
        public static void transform(double wgLat, double wgLon, out double mgLat, out double mgLon)
        {
            if (outOfChina(wgLat, wgLon))
            {
                mgLat = wgLat;
                mgLon = wgLon;
                return;
            }
            double dLat = transformLat(wgLon - 105.0, wgLat - 35.0);
            double dLon = transformLon(wgLon - 105.0, wgLat - 35.0);
            double radLat = wgLat / 180.0 * pi;
            double magic = Math.Sin(radLat);
            magic = 1 - ee * magic * magic;
            double sqrtMagic = Math.Sqrt(magic);
            dLat = (dLat * 180.0) / ((a * (1 - ee)) / (magic * sqrtMagic) * pi);
            dLon = (dLon * 180.0) / (a / sqrtMagic * Math.Cos(radLat) * pi);
            mgLat = wgLat + dLat;
            mgLon = wgLon + dLon;
        }
        static bool outOfChina(double lat, double lon)
        {
            if (lon < 72.004 || lon > 137.8347)
                return true;
            if (lat < 0.8293 || lat > 55.8271)
                return true;
            return false;
        }
        static double transformLat(double x, double y)
        {
            double ret = -100.0 + 2.0 * x + 3.0 * y + 0.2 * y * y + 0.1 * x * y + 0.2 * Math.Sqrt(Math.Abs(x));
            ret += (20.0 * Math.Sin(6.0 * x * pi) + 20.0 * Math.Sin(2.0 * x * pi)) * 2.0 / 3.0;
            ret += (20.0 * Math.Sin(y * pi) + 40.0 * Math.Sin(y / 3.0 * pi)) * 2.0 / 3.0;
            ret += (160.0 * Math.Sin(y / 12.0 * pi) + 320 * Math.Sin(y * pi / 30.0)) * 2.0 / 3.0;
            return ret;
        }
        static double transformLon(double x, double y)
        {
            double ret = 300.0 + x + 2.0 * y + 0.1 * x * x + 0.1 * x * y + 0.1 * Math.Sqrt(Math.Abs(x));
            ret += (20.0 * Math.Sin(6.0 * x * pi) + 20.0 * Math.Sin(2.0 * x * pi)) * 2.0 / 3.0;
            ret += (20.0 * Math.Sin(x * pi) + 40.0 * Math.Sin(x / 3.0 * pi)) * 2.0 / 3.0;
            ret += (150.0 * Math.Sin(x / 12.0 * pi) + 300.0 * Math.Sin(x / 30.0 * pi)) * 2.0 / 3.0;
            return ret;
        }
    }
}

以上参考自:http://www.xue5.com/Mobile/iOS/679842.html
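如果想在脚本里一次性完成“地球坐标→火星坐标→百度坐标”的转换,可以把上面两段算法合并移植。下面给出一个Python版本的示意(算法与常数和上文的C#/C代码一致,仅语言不同,计算精度请自行验证):

import math

x_pi = math.pi * 3000.0 / 180.0
a = 6378245.0                  # Krasovsky 1940 椭球长半轴
ee = 0.00669342162296594323    # 偏心率平方

def _out_of_china(lat, lon):
    return not (72.004 <= lon <= 137.8347 and 0.8293 <= lat <= 55.8271)

def _transform_lat(x, y):
    ret = -100.0 + 2.0 * x + 3.0 * y + 0.2 * y * y + 0.1 * x * y + 0.2 * math.sqrt(abs(x))
    ret += (20.0 * math.sin(6.0 * x * math.pi) + 20.0 * math.sin(2.0 * x * math.pi)) * 2.0 / 3.0
    ret += (20.0 * math.sin(y * math.pi) + 40.0 * math.sin(y / 3.0 * math.pi)) * 2.0 / 3.0
    ret += (160.0 * math.sin(y / 12.0 * math.pi) + 320.0 * math.sin(y * math.pi / 30.0)) * 2.0 / 3.0
    return ret

def _transform_lon(x, y):
    ret = 300.0 + x + 2.0 * y + 0.1 * x * x + 0.1 * x * y + 0.1 * math.sqrt(abs(x))
    ret += (20.0 * math.sin(6.0 * x * math.pi) + 20.0 * math.sin(2.0 * x * math.pi)) * 2.0 / 3.0
    ret += (20.0 * math.sin(x * math.pi) + 40.0 * math.sin(x / 3.0 * math.pi)) * 2.0 / 3.0
    ret += (150.0 * math.sin(x / 12.0 * math.pi) + 300.0 * math.sin(x / 30.0 * math.pi)) * 2.0 / 3.0
    return ret

def wgs84_to_gcj02(lat, lon):
    # 地球坐标 -> 火星坐标(即GPS加偏)
    if _out_of_china(lat, lon):
        return lat, lon
    d_lat = _transform_lat(lon - 105.0, lat - 35.0)
    d_lon = _transform_lon(lon - 105.0, lat - 35.0)
    rad_lat = lat / 180.0 * math.pi
    magic = 1 - ee * math.sin(rad_lat) ** 2
    sqrt_magic = math.sqrt(magic)
    d_lat = (d_lat * 180.0) / ((a * (1 - ee)) / (magic * sqrt_magic) * math.pi)
    d_lon = (d_lon * 180.0) / (a / sqrt_magic * math.cos(rad_lat) * math.pi)
    return lat + d_lat, lon + d_lon

def gcj02_to_bd09(lat, lon):
    # 火星坐标 -> 百度坐标(对应上文的bd_encrypt)
    z = math.sqrt(lon * lon + lat * lat) + 0.00002 * math.sin(lat * x_pi)
    theta = math.atan2(lat, lon) + 0.000003 * math.cos(lon * x_pi)
    return z * math.sin(theta) + 0.006, z * math.cos(theta) + 0.0065

def wgs84_to_bd09(lat, lon):
    return gcj02_to_bd09(*wgs84_to_gcj02(lat, lon))

# 以下文Java示例中用到的坐标(纬度30.184678,经度120.151379)为例
print(wgs84_to_bd09(30.184678, 120.151379))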
 

3.百度在线转换API

http://api.map.baidu.com/ag/coord/convert?from=0&to=4&x=longitude&y=latitude
from: 来源坐标系(0表示原始GPS坐标,2表示Google坐标)
to: 转换后的坐标(4就是百度自己啦,好像这个必须是4才行)
x: 经度
y: 纬度

请求之后会返回一串Json

{
    "error": 0,
    "x": "MTIxLjUwMDIyODIxNDk2",
    "y": "MzEuMjM1ODUwMjYwMTE3"
}
error: 结果是否出错的标志位,"0"表示OK
x: 百度坐标系的经度(Base64加密)
y: 百度坐标系的纬度(Base64加密)

什么情况,经纬度居然还加密?那接下来也只好见招拆招了

import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStream;
import java.io.InputStreamReader;
import java.io.OutputStreamWriter;
import java.net.URL;
import java.net.URLConnection;
import com.sun.org.apache.xerces.internal.impl.dv.util.Base64;

public class BaiduAPIConverter extends Thread {

    public static void testPost(String x, String y) throws IOException {
        try {
            URL url = new URL("http://api.map.baidu.com/ag/coord/convert?from=2&to=4&x=" + x + "&y=" + y);
            URLConnection connection = url.openConnection();
            connection.setDoOutput(true);
            OutputStreamWriter out = new OutputStreamWriter(connection.getOutputStream(), "utf-8");
            // remember to clean up
            out.flush();
            out.close();
            // 一旦发送成功,用以下方法就可以得到服务器的回应:
            String sCurrentLine, sTotalString;
            sCurrentLine = sTotalString = "";
            InputStream l_urlStream;
            l_urlStream = connection.getInputStream();
            BufferedReader l_reader = new BufferedReader(new InputStreamReader(l_urlStream));
            while ((sCurrentLine = l_reader.readLine()) != null) {
                if (!sCurrentLine.equals(""))
                    sTotalString += sCurrentLine;
            }
            // 去掉最外层的大括号后,按逗号拆分出error、x、y三个字段
            sTotalString = sTotalString.substring(1, sTotalString.length() - 1);
            String[] results = sTotalString.split("\\,");
            if (results.length == 3) {
                if (results[0].split("\\:")[1].equals("0")) {
                    String mapX = results[1].split("\\:")[1];
                    String mapY = results[2].split("\\:")[1];
                    mapX = mapX.substring(1, mapX.length() - 1);
                    mapY = mapY.substring(1, mapY.length() - 1);
                    // x、y字段是Base64加密的,这里解码得到百度坐标
                    mapX = new String(Base64.decode(mapX));
                    mapY = new String(Base64.decode(mapY));
                    System.out.println("\t" + mapX + "\t" + mapY);
                }
            }
            sleep(10000);
        } catch (InterruptedException e) {
            e.printStackTrace();
        }
    }

    /**
     * @param args
     * @throws IOException
     */
    public static void main(String[] args) throws IOException {
        testPost("120.151379", "30.184678");
        System.out.println("ok");
    }
}

到这里也差不多好了,主要的代码都写出来了,其他的您就自己写吧。

以上参考自:http://scalpel.me/archives/136/

四、重点啊,原来百度有内置转换方法,这下可以不局限于百度定位SDK了

在百度地图中取得WGS-84坐标,调用如下方法:
BMapManager.getLocationManager().setLocationCoordinateType(MKLocationManager.MK_COORDINATE_WGS84);
这样从百度api中取得的坐标就是WGS-84了,可是这种坐标如果直接显示到百度地图上就会偏移,也就是说取出一个坐标,原封不动地显示上去就偏移了,所以为了显示正常,就需要在绘制到百度地图上之前转换成BD-09。
转换成BD-09,调用方法:
GeoPoint wgs84;
GeoPoint bd09 = CoordinateConvert.bundleDecode(CoordinateConvert.fromWgs84ToBaidu(wgs84));
这里实在不明白为何要设计成CoordinateConvert.fromWgs84ToBaidu(wgs84)返回了一个Bundle,所以还需要CoordinateConvert.bundleDecode()再转成GeoPoint。


How To Build Your Own Rogue GSM BTS For Fun And Profit


How To Build Your Own Rogue GSM BTS For Fun And Profit

31 Mar 2016 in HACKING GSM BTS YATEBTS ROGUE BTS EVILBTS YATE BLADERF BLADERF X40 RF GSM HIJACKING GSM INTERCEPT GSM SNIFFING
The last week I’ve been visiting my friend and colleague Ziggy in Tel Aviv which gave me something I’ve been waiting for almost a year, a brand new BladeRF x40, a low-cost USB 3.0 Software Defined Radio working in full-duplex, meaning that it can transmit and receive at the same time ( while for instance the HackRF is only half-duplex ).

In this blog post I’m going to explain how to create a portable GSM BTS which can be used either to create a private ( and vendor free! ) GSM network or for GSM active tapping/interception/hijacking … yes, with some (relatively) cheap electronic equipment you can basically build something very similar to what the governments have been using for years to perform GSM interception.

I’m not writing this post to help script kiddies breaking the law, my point is that GSM is broken by design and it’s about time vendors do something about it considering how much we’re paying for their services.

my bts

Hardware Requirements

In order to build your BTS you’ll need the following hardware:

Software

Let’s start by installing the latest Raspbian image to the microSD card ( use the “lite” one, no need for UI😉 ), boot the RPI, configure either the WiFi or ethernet and so forth; at the end of this process you should be able to SSH into the RPI.

Next, install a few dependencies we’re gonna need soon:

sudo apt-get install git apache2 php5 bladerf libbladerf-dev libbladerf0 automake

At this point, you should already be able to interact with the BladeRF, plug it into one of the USB ports of the RPI, dmesg should be telling you something like:

[ 2332.071675] usb 1-1.3: New USB device found, idVendor=1d50, idProduct=6066
[ 2332.071694] usb 1-1.3: New USB device strings: Mfr=1, Product=2, SerialNumber=3
[ 2332.071707] usb 1-1.3: Product: bladeRF
[ 2332.071720] usb 1-1.3: Manufacturer: Nuand
[ 2332.071732] usb 1-1.3: SerialNumber: b4ef330e19b718f752759b4c14020742

Start the bladeRF-cli utility and issue the version command:

pi@raspberrypi:~ $ sudo bladeRF-cli -i
bladeRF> version

  bladeRF-cli version:        0.11.1-git
  libbladeRF version:         0.16.2-git

  Firmware version:           1.6.1-git-053fb13-buildomatic
  FPGA version:               0.1.2

bladeRF>

IMPORTANT Make sure you have these exact versions of the firmware and the FPGA, other versions might not work in our setup.

Now we’re going to install Yate and YateBTS, two open source softwares that will make us able to create the BTS itself.

Since I spent a lot of time trying to figure out which specific version of each was compatible with the bladeRF, I’ve created a github repository with correct versions of both, so in your RPI home folder just do:

git clone https://github.com/evilsocket/evilbts.git
cd evilbts

Let’s start building both of them:

cd yate
./autogen.sh
./configure --prefix=/usr/local
make -j4
sudo make install
sudo ldconfig
cd ..

cd yatebts
./autogen.sh
./configure --prefix=/usr/local
make -j4
sudo make install
sudo ldconfig

This will take a few minutes, but eventually you’ll have everything installed in your system.

Next, we’ll symlink the NIB web ui into our apache www folder:

cd /var/www/html/
sudo ln -s /usr/local/share/yate/nib_web nib

And grant write permission to the configuration files:

sudo chmod -R a+w /usr/local/etc/yate

You can now access your BTS web ui from your browser:

http://ip-of-your-rpi/nib

Time for some configuration now!

Configuration

Open the /usr/local/etc/yate/ybts.conf file either with nano or vi and update the following values:

Radio.Band=900
Radio.C0=1000
Identity.MCC=YOUR_COUNTRY_MCC
Identity.MNC=YOUR_OPERATOR_MNC
Identity.ShortName=MyEvilBTS
Radio.PowerManager.MaxAttenDB=35
Radio.PowerManager.MinAttenDB=35

You can find valid MCC and MNC values here.

Now, edit the /usr/local/etc/yate/subscribers.conf:

country_code=YOUR_CONTRY_CODE
regexp=.*

WARNING Using the .* regular expression will make EVERY GSM phone in your area connect to your BTS.

In your NIB web ui you’ll see something like this:

NIB

Enable GSM-Tapping

In the “Tapping” panel, you can enable it for both GSM and GPRS, this will basically “bounce” every GSM packet to the loopback interface; since we haven’t configured any encryption, you’ll be able to see all the GSM traffic by simply tcpdump-ing your loopback interface😀

tapping

Start It!

Finally, you can start your new BTS by executing the command ( with the BladeRF plugged in! ) :

sudo yate -s

If everything was configured correctly, you’ll see a bunch of messages and the line:

Starting MBTS...
Yate engine is initialized and starting up on raspberrypi
RTNETLINK answers: File exists
MBTS ready

At this point, the middle LED for your bladeRF should start blinking.

Test It!

Now, phones will start to automatically connect, this will happen because of the GSM implementation itself:

  • You can set whatever MCC, MNC and LAC you like, effectively spoofing any legit GSM BTS.
  • Each phone will search for BTS of its operator and select the one with the strongest signal … guess which one will be the strongest? Yep … ours😀

Here’s a picture taken from my Samsung Galaxy S6 ( using the Network Cell Info Lite app ) which automatically connected to my BTS after 3 minutes:

MyEvilBTS

From now on, you can configure the BTS to do whatever you want … either act as a “proxy” to a legit SMC ( with a GSM/3g USB dongle ) and sniff the unencrypted GSM traffic of each phone, or to create a private GSM network where users can communicate for free using SIP, refer to the YateBTS Wiki for specific configurations.

Oh and of course, if you plug the USB battery, the whole system becomes completely portable:)

References and Further Readings



GSM BTS Hacking: 利用BladeRF和开源BTS 5搭建基站


引文

如果你已经购买了Nuand(官方)BladeRF x40,那么就可以在上面运行OpenBTS,并输入一些指令来完成一些任务。一般来说,HackRF是一款覆盖频率最宽的SDR板卡,它几乎所有的信息都是开源的,甚至包括KiCad文件;缺点是它没有FPGA,使用低速的USB2接口,ADC/DAC的精度比较低。

在使用 bladeRF 板卡时需要注意两个“镜像”:固件 (firmware) 镜像与 FPGA 镜像。二者是两个不同的概念。但是业界叫法不一,有时候会把二者混为一谈。一般而言,固件指的是嵌入到硬件设备中的软件,存放在只读存储器 (ROM) 或者闪存 (flash) 中,一般不易修改,修改的操作称为“刷新”(flashing)。固件这个名词最初和微代码相关,不过 bladeRF 里源代码是嵌入式 C 程序。FPGA 全名为可编程门阵列,其门电路、寄存器连接可以编程重构,其源程序一般是硬件描述语言 (HDL),通过综合 (synthesis) 等步骤得到二进制文件。在 bladeRF 板卡上,FPGA 只是一块 Altera 芯片。在没有内置非挥发存储时,FPGA 镜像需要每次上电时重新加载,bladeRF 就是这种情况。所以在拿到板卡时,上面已有固件,但还没有 FPGA 镜像。下面本文会具体说明在使用 bladeRF 时如何刷新固件、加载/更新 FPGA 镜像、以及如何自动加载 FPGA 镜像。注意,有时为了避免混淆,会称 FPGA 镜像为 FPGA 比特流,或者 FPGA 配置(因为它就是配置了门电路等组件的连接)。

本文中介绍的工具、技术带有一定的攻击性,请合理合法使用。

系统:

Ubuntu 12.04 LTS Server (32位)下载:(点击我

升级git版本

sudo apt-get install software-properties-common python-software-properties
sudo add-apt-repository ppa:git-core/ppa (press enter to continue)
sudo apt-get update
sudo apt-get install git

安装一些前提软件包。

设置好之后,开始安装。

将下面代码复制粘贴,运行-将会开始安装:

sudo apt-get install $(
    wget -qO - https://raw.githubusercontent.com/RangeNetworks/dev/master/build.sh | \
    grep installIfMissing | \
    grep -v "{" | \
    cut -f2 -d" ")

另外一种安装方式是在 build.sh 中搜索 installIfMissing 相关的文本行,然后手动通过 apt-get 命令来安装对应的软件包。当然你也可以直接阅读程序代码来操作。

值得注意的是,安装时候有两个安装包会失败(libzmq3 & libzmq3-devel)-但可以在Ubuntu系统下直接安装。

$ sudo add-apt-repository ppa:chris-lea/zeromq
$ sudo apt-get update
$ sudo apt-get install libzmq3-dbg libzmq3-dev 

下一步是安装UHD以及GNURadio。运行下面的指令将会执行安装,当然这会花掉一些时间,具体取决于你的电脑性能。

wget http://www.sbrac.org/files/build-gnuradio && chmod a+x ./build-gnuradio && ./build-gnuradio

一旦执行完成,你将会收到失败或成功的提示信息。如果失败,可以选择重新安装,并查看相关信息来解决问题。下面将会安装 OpenBTS 相关的依赖软件,包括:libgsm1-dev、asterisk-dev、asterisk-config,安装命令如下:

$ sudo apt-get install libgsm1-dev asterisk-dev asterisk-config

当然你可以自主选择是否安装libusb,请注意不是 libusbx 。安装可以到www.libusb.org页面下载,然后将其复制到 /usr/src目录下。将/usr/lib/x86_64-linux-gnu/libusb.so原始文件备份后覆盖它。

安装OpenBTS

完成上面的事情之后,现在就开始安装它吧。

1.为其(OpenBTS)创建一个目录(结合实际情况)

2.然后安装并运行它

#!/bin/bash

git clone https://github.com/RangeNetworks/openbts.git
git clone https://github.com/RangeNetworks/smqueue.git
git clone https://github.com/RangeNetworks/subscriberRegistry.git

#From here and downwards you can copy&paste (that's why the ';' are for)
for D in *; do (
    echo $D;
    echo "=======";
    cd $D;
    git clone https://github.com/RangeNetworks/CommonLibs.git;
    git clone https://github.com/RangeNetworks/NodeManager.git);
done;
git clone https://github.com/RangeNetworks/libcoredumper.git;
git clone https://github.com/RangeNetworks/liba53.git

3.创建 libcoredumper

cd libcoredumper;
./build.sh && \
   sudo dpkg -i *.deb;
cd ..

4.创建 liba53

cd liba53;
make && \
   sudo make install;
cd ..;

5.在同一目录下,check out“YateBTS”

svn checkout http://voip.null.ro/svn/yatebts/trunk yatebts

6.下一步去掉 FPGA(自动加载)信息,然后加载并打开它

vim ./yatebts/mbts/TransceiverRAD1/bladeRFDevice.cpp

从#ifdef(108行)到#endif(129行)结束,这之间是空的,应该为后来留为备用的。

7.更换目录(YateBTS)然后运行 autogen.sh

$ cd obts/yatebts
$ ./autogen.sh

8.这样就可以生成configure配置脚本。如果你立刻运行该脚本,将会出现错误以及搜索Yate的提示信息,所以先要打开它进行修改

$vim configure

在configure脚本中找到 as_fn_err $ 以及 $LINENO 变量(与开源软件Yate的检查相关)并做相应替换,然后进行下一步吧

9.重新配置

./configure

10.这个时候需要分别编译YateBTS下的两个目录

a) Peering

$ cd /home/openbts/obts/yatebts/mbts/Peering
$ make

b) TransceiverRAD1

$ cd /home/openbts/obts/yatebts/mbts/TransceiverRAD1
$ make

11.复制两个文件到 OpenBTS 文件目录下

$ cd ..
$ cp ./yatebts/mbts/TransceiverRAD1/transceiver-bladerf openbts/apps/
$ cd openbts/apps/
$ ln -sf transceiver-bladerf transceiver

12.编译OpenBTS

$ cd /home/openbts/obts/openbts
$ ./autogen.sh
$ ./configure --with-uhd
$ make

13.下一步配置SQLite数据库(bladeRF),需要做一些修改

vim /home/openbts/obts/openbts/apps/OpenBTS.example.sql

查询并替换以下信息

完成并进行下一步

14.创建OpenBTS配置目录

$ sudo mkdir /etc/OpenBTS

15.在OpenBTS目录下,安装软件库

$ sudo sqlite3 -init ./apps/OpenBTS.example.sql /etc/OpenBTS/OpenBTS.db ".quit"

16.一旦完成,下一步可以通过命令来测试它

$ sqlite3 /etc/OpenBTS/OpenBTS.db .dump

如果看到了大量的输出数据信息,那么就表明成功了。进行下一步

17.通过命令运行OpenBTS

$ cd /home/openbts/obts/openbts/apps
$ sudo ./OpenBTS

如果看见系统启动信息,说明你的基站已经准备就绪并成功启动。此时使用手机搜寻附近网络,应该能看到测试PLMN网络的信息(00101)。


18.完成上面的测试后,退出OpenBTS,然后安装用户注册服务(sipauthserve)以及短信队列服务(smqueue),OpenBTS需要它们配合才能正常工作。没有这些,手机不会连接测试网络。

19. 对于用户注册表,必须要创建一个文件目录,即/var/lib/asterisk/sqlite3dir,创建它

$ sudo mkdir -p /var/lib/asterisk/sqlite3dir

20.创建 sipauthserve

$ cd subscriberRegistry
$ ./autogen.sh
$ ./configure
$ make

在/home/openbts/obts/subscriberRegistry/apps目录下创建

21.下一步配置 sipauthserve

$ cd /home/openbts/obts/subscriberRegistry
$ sudo sqlite3 -init subscriberRegistry.example.sql /etc/OpenBTS/sipauthserve.db ".quit"

22.下一步编译smqueue,它依赖subscriberRegistry中的SubscriberRegistry.h文件,可以在其目录下通过符号链接来解决这个依赖

$ cd /home/openbts/obts/smqueue
$ ln -s /home/openbts/obts/subscriberRegistry/ SR
$ autoreconf -i
$ ./configure
$ make

23.一旦完成之后,就需要修改其配置文件

$ cd /home/openbts/obts/smqueue
$ sudo sqlite3 -init smqueue/smqueue.example.sql /etc/OpenBTS/smqueue.db ".quit"

bladeRF 固件升级与FPGA镜像加载

24.在https://github.com/Nuand/bladeRF/wiki/Upgrading-bladeRF-firmware升级固件

25.完成之后在http://www.nuand.com/fpga.php下载镜像(FPGA)

26.加载FPGA镜像

$ bladeRF-cli -L <path to fpga image file>

这步一定要有耐心,不要突然终止,别让板子变成砖了。

27. 完成之后,开始运行之前的配置的服务吧。

$ cd /home/openbts/obts/smqueue
$ sudo ./smqueue &

$ cd /home/openbts/obts/subscriberRegistry/apps
$ sudo ./sipauthserve &

$ cd /home/openbts/obts/openbts/apps
$ sudo ./OpenBTS &

28.启动OpenBTSCLI

$ cd /home/openbts/obts/openbts/apps/
$ sudo ./OpenBTSCLI

29. 默认情况下, OpenBTS不会接受额外的登记信息,需要做到下面几点

a) 输入你手机的IMSI(国际移动用户识别码)

b) 设置所有的IMSI号可以被登记

config Control.LUR.OpenRegistration .*

这么做将会导致信号范围内所有的手机连接到你配置的基站上面,包括(隔壁的妹子or老王)。

现在你的手机上应该能够搜索到基站网络了,可以拨打测试服务号码(作者那边是600,由Asterisk提供)做测试。

了解更多

[1] https://github.com/Nuand/bladeRF/wiki/Minimalistic-build-and-run-test-for-OpenBTS-5

[2] https://wush.net/trac/rangepublic/wiki/BuildInstallRun#ConfiguringOpenBTS

[3] https://wush.net/trac/rangepublic/wiki/CommonErrors

[4] http://openbts.org/w/index.php/Main_Page

[5] https://github.com/Nuand/bladeRF/wiki/Upgrading-bladeRF-firmware

*参考来源:linux.net.pk,FB小编亲爱的兔子编译,转载请注明来自FreeBuf黑客与极客(FreeBuf.COM)


React Native Component For Maps With An Extensive Feature Set


React Native Maps is an open source component allowing you to create map views with React Native on both iOS and Android.

React Native Maps has many great features including:

Usage of arbitrary react views as custom markers and callouts
Events for tracking movement, map touches, callout, and marker touches
Location support with easy animating to the specified location
Usage of other react views as markers so you can easily create custom markers
Easy gesture support using the Animated API
Polygon drawing on the maps
Normal map style markers
Draggable markers

React Native Maps

You can find React Native Maps on Github here.

A great component for those looking to use maps in React Native.


Using BMP180 for temperature, pressure and altitude measurements


http://embedded-lab.com/blog/bmp180/#sthash.AwKiQ3NJ.dpuf

The BMP180 is a new generation digital barometric pressure and temperature sensor from Bosch Sensortec. In this tutorial, we will briefly review this device and describe how to interface it with an Arduino Uno board for measuring the surrounding temperature and pressure. We will also discuss about retrieving the sensor altitude from its pressure readings.


Experiment setup

Bosch Sensortec’s BMP180 is an ultra low-power digital temperature and pressure sensor with high accuracy and stability. It consists of a piezo-resistive sensor, an analog to digital converter and a control unit with EEPROM and a serial I2C interface. The raw measurements of pressure and temperature from the BMP180 sensor have to be compensated for temperature effects and other parameters using the calibration data saved into the EEPROM. In this tutorial, we will use an Arduino board to read the temperature and barometric pressure measurements from the BMP180 sensor and display the data on a 1.44″ ILI9163-based TFT display. If you would like to repeat this experiment, you will need the following things.

1. Any Arduino board running at 3.3V. I am using Crowduino Uno board from Elecrow, which has an onboard slide switch to select the operating voltage between 3.3V and 5.0V. If you want to use this board, make sure the switch is slid to the 3.3V position.

Crowduino Uno board from Elecrow

2. BMP180 sensor module

BMP180 sensor breakout module

3. ILI9163-based TFT display (I am using one with 1.44″ size display from Elecrow).

1.44" TFT display (ILI9163 driver)

4. A breadboard and few jumper wires for hooking up the sensor and the display to the Arduino board.

The following diagram describes the experimental setup for this tutorial. The BMP180 and the TFT display are both powered by 3.3V. The BMP180 supports I2C interface and therefore the SDA and SCL pins go to A4 and A5 pins of the Arduino board. The ILI9163 TFT driver supports SPI interface. The table shown on the right side of the diagram below describes the wiring between the display and Arduino. The I2C and SPI pin names are printed on the bottom layer silkscreen of the BMP180 and the TFT display modules.

Sensor and display setup

Here is the actual setup for this experiment made on a breadboard.

BMP180 sensor hookup with Arduino

Arduino firmware

For sensor readings, I am using the BMP180 Arduino Library from Love Electronics Ltd (I am not sure if this company exists now or not but that’s what it says in the library). You need to download it (link provided below) and install this library into your Arduino/libraries/ location.

Download BMP180 Library

For the ILI9163 TFT LCD, I am using another free and open source Arduino Library called TFT_ILI9163C, which you can download from the following link.

Download TFT_ILI9163C Arduino Library

The TFT library uses Adafruit_GFX libraries for fonts, so you need to download and install it too.

Download Adafruit_GFX_Library

After installing both of these libraries, it’s time to write firmware for the Arduino. The firmware I have written and shared below displays temperature in Celsius and Fahrenheit scales and barometric pressure in millibar and inHg. In order to compute the sensor altitude, we need to know the reference surface pressure value as discussed in the following section.

Important Note about retrieving sensor altitude

Please note that the BMP180 sensor provides absolute measurements for temperature and pressure, but does not give a direct output for the altitude. Since the atmospheric pressure reduces with altitude, you can find out the vertical displacement of the sensor by knowing the reference pressure value at the ground. For example, in order to compute the sensor altitude, say from the sea level, you need to know the current mean sea level pressure at your local place. The mean sea-level pressure is not a constant and varies diurnally with ambient temperature and weather patterns. The easiest way to find out the current sea-level pressure would be to check the website of your nearest airport or the national weather service. They usually update it on their website hourly or so. I live in Williamsburg, VA and I checked the mean sea-level pressure from Weather.gov website. At the time I was doing this experiment, the mean sea level pressure was 1027.7 millibars or 102770 Pascal. In the Arduino code below (float seaLevelPressure = 102770;), I used this value for the mean sea level pressure and used the difference in the sensor read pressure and this value to compute the altitude of the sensor location, which was the second floor of my house in Williamsburg, VA. Therefore, in order to compute the altitude of your sensor location, you have to replace this value with your current local sea-level pressure value in Pascal (1 millibar = 100 Pascal). With the knowledge of the local sea-level pressure, the Arduino firmware below also displays the altitude above the sea level in feet and meters.
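As a quick sanity check, the altitude can also be computed offline from a pressure reading with the international barometric formula (the same relation the BMP180 datasheet uses): h = 44330 * (1 - (p/p0)^(1/5.255)). The short Python sketch below is only an illustration; the pressure reading in the example is a made-up value, not data taken from this experiment.

def pressure_to_altitude(pressure_pa, sea_level_pa=101325.0):
    # International barometric formula: h = 44330 * (1 - (p/p0)^(1/5.255))
    return 44330.0 * (1.0 - (pressure_pa / sea_level_pa) ** (1.0 / 5.255))

# Example with the sea-level pressure used in this article (102770 Pa)
# and a hypothetical sensor reading of 102440 Pa:
h_m = pressure_to_altitude(102440.0, sea_level_pa=102770.0)
print("%.1f m (%.1f ft)" % (h_m, h_m * 3.2808))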

Mean sea-level pressure data

Here is a complete Arduino code for this project. I would recommend to use the download file below instead of copying and pasting the code from here, which does not work sometime.

#include <Wire.h>
#include <BMP180.h>
#include <SPI.h>
#include <Adafruit_GFX.h>
#include <TFT_ILI9163C.h>
// Define pins for ILI9163 SPI display
#define __CS 10
#define __DC 9 // Labeled as A0 in some modules
#define __RST 8
// Connect SDA to Arduino pin 11 (MOSI), and SCK to 13 (SCK)
// Color definitions
#define BLACK 0x0000
#define BLUE 0x001F
#define RED 0xF800
#define GREEN 0x07E0
#define CYAN 0x07FF
#define MAGENTA 0xF81F
#define YELLOW 0xFFE0
#define WHITE 0xFFFF
#define TRANSPARENT -1
TFT_ILI9163C display = TFT_ILI9163C(__CS, __DC, __RST);
// Store an instance of the BMP180 sensor.
BMP180 barometer;
// Store the current sea level pressure at your location in Pascals.
float seaLevelPressure = 102770; // Williamsburg, VA on Dec 31, 2014, 14:54 Eastern Time
void setup()
{
 display.begin();
 display.setBitrate(24000000);
 display.setRotation(2);
 display.clearScreen();
 // We start the serial library to output our messages.
 Serial.begin(9600);
 // We start the I2C on the Arduino for communication with the BMP180 sensor.
 Wire.begin();
 // We create an instance of our BMP180 sensor.
 barometer = BMP180();
 // We check to see if we can connect to the sensor.
 if(barometer.EnsureConnected())
 {
 Serial.println("Connected to BMP180."); // Output we are connected to the computer.
 // When we have connected, we reset the device to ensure a clean start.
 barometer.SoftReset();
 // Now we initialize the sensor and pull the calibration data.
 barometer.Initialize();
 }
 else
 {
 Serial.println("No sensor found.");
 }
}
void loop()
{
 if(barometer.IsConnected)
 {
 // Retrieve the current pressure in Pascals.
 long currentPressureP = barometer.GetPressure();
 float currentPressuremb = currentPressureP/100.0;
 float currentPressureinHg = currentPressuremb*0.02953;

 // Print out the Pressure.
 Serial.print("Pressure: ");
 Serial.print(currentPressureP);
 Serial.println(" Pa");
 Serial.print("Pressure: ");
 Serial.print(currentPressuremb);
 Serial.println(" mbar");
 Serial.print("Pressure: ");
 Serial.print(currentPressureinHg);
 Serial.println(" inHg");
 // Retrieve the current altitude (in meters). Current Sea Level Pressure is required for this.
 float altitudem = barometer.GetAltitude(seaLevelPressure);
 float altitudeft = altitudem*3.2808;
 // Print out the Altitude.
 Serial.print("\tAltitude: ");
 Serial.print(altitudem);
 Serial.print(" m");
 Serial.print("\tAltitude: ");
 Serial.print(altitudeft);
 Serial.print(" ft");

 // Retrieve the current temperature in degrees Celsius.
 float currentTemperatureC = barometer.GetTemperature();
 float currentTemperatureF = (9.0/5.0)*currentTemperatureC+32.0;
 // Print out the Temperature
 Serial.print("\tTemperature: ");
 Serial.print(currentTemperatureC);
 Serial.write(176);
 Serial.print("C");
 Serial.print(currentTemperatureF);
 Serial.write(176);
 Serial.print("F");
 Serial.println(); // Start a new line.

 // Now display results on LCD

 display.fillScreen();
 display.setCursor(0, 0);
 display.setTextColor(WHITE);
 display.setTextSize(1);
 display.print("BMP180 Sensor Demo");

 // Display temperature in F
 display.setCursor(0, 16);
 display.setTextColor(YELLOW);
 display.setTextSize(2);
 display.print("T=");
 display.print(currentTemperatureF);
 display.setTextSize(1);
 display.print(" o");
 display.setTextSize(2);
 display.print("F");
 // Display temperature in C
 display.setCursor(24, 32);
 display.print(currentTemperatureC);
 display.setTextSize(1);
 display.print(" o");
 display.setTextSize(2);
 display.print("C");

 //Now display pressure in mbar
 display.setCursor(0, 48);
 display.setTextColor(CYAN);
 display.setTextSize(2);
 display.print("P=");
 display.print(currentPressuremb,1);
 display.print("mb");
 // Display pressure in inHg
 display.setCursor(24, 64);
 display.setTextColor(CYAN);
 display.print(currentPressureinHg,1);
 display.print("inHg");

 // Now display altitude in feet
 display.setCursor(0, 80);
 display.setTextColor(WHITE);
 display.setTextSize(2);
 display.print("H=");
 display.print(altitudeft,1);
 display.print("ft");
 // Display altitude in meters
 display.setCursor(24, 96);
 display.setTextColor(WHITE);
 display.print(altitudem,1);
 display.print("m");

 delay(5000); // Show new results every 5 seconds.
 }
}

Download the Arduino sketch here

Output

The sensor altitude shown was about 88 feet above the sea level, which seems to be right as compared to the city data published here: http://en.wikipedia.org/wiki/Williamsburg,_Virginia


The sensor is very sensitive to altitude. The following measurement was taken by placing the sensor on my dining table in the first floor. The altitude was reduced by ~8 feet, which seems to be reasonable.


Looking for a free tool for circuit designing? We recommend you to try EasyEDA.

– See more at: http://embedded-lab.com/blog/bmp180/#sthash.AwKiQ3NJ.dpuf


极客DIY:使用树莓派搭建Tor节点,实现科学上网


我们的目标是:用树莓派实现“硬件Tor”,通电自动连接Tor节点,所有流量全部强制通过Tor节点引出,到达目标地址,断线无限重连。不管是手机、平板,还是PC,只要连接到树莓派之后,全部实现全程Tor节点流量,实现科学上网。

0×01:前期准备

1.1:准备硬件:

IMG_20160412_091607.jpg

1.2:安装系统

下载kali-2.1.2-rpi.img,并且使用win32diskimager写入SD卡。

0x01.png

将电源+电源线+树莓派B板+无线网卡+SD卡接好,通电,连接到家庭路由器上,此处为常用的姿势,不可能插错的,所以就不上图了。树莓派上四个灯都点亮了之后,进入家庭路由器网关,找到树莓派的IP地址,用Putty软件SSH连到树莓派上,kali的SSHD是默认开启的,账号是root,密码是toor。连接时会弹出是否接受SSH秘钥,选择“是”接受。连接成功的正确姿势是这样的:

屏幕截图_041216_101249_AM.jpg

1.3:添加源&更新

如果不更新,后面很多软件会无法进行自动化安装。Vi打开/etc/apt/sources.list,在源里添加以下内容,然后进行更新,apt-get update && apt-get upgrade。根据网速可能需要若干小时,因为要连欧洲服务器,所以速度很龟毛。完成后手动reboot重启。

deb http://mirrors.ustc.edu.cn/kali kali main non-free contrib
deb-src http://mirrors.ustc.edu.cn/kali kali main non-free contrib
deb http://mirrors.ustc.edu.cn/kali-security kali/updates main contrib non-free
deb http://mirrors.aliyun.com/kali kali main non-free contrib
deb-src http://mirrors.aliyun.com/kali kali main non-free contrib
deb http://mirrors.aliyun.com/kali-security kali/updates main contrib non-free

1.4:可选项

恢复只有官方Wheezy镜像才自带的raspi-config功能:安装过原版系统的都知道,原版功能确实很便利。(下方文件可能有更新,可根据需求安装最新版)

wget http://archive.raspberrypi.org/debian/pool/main/r/raspi-config/raspi-config_20150131-1_all.deb
wget http://http.us.debian.org/debian/pool/main/t/triggerhappy/triggerhappy_0.3.4-2_armel.deb
wget http://http.us.debian.org/debian/pool/main/l/lua5.1/lua5.1_5.1.5-7.1_armel.deb #以上为下载主安装包和依赖包
dpkg -i triggerhappy_0.3.4-2_armel.deb
dpkg -i lua5.1_5.1.5-7.1_armel.deb
dpkg -i raspi-config_20150131-1_all.deb 

以上是安装依赖包和主包 注意要严格按照安装顺序
raspi-config的配置过程省略,网上遍地都是,以上准备工作结束。

屏幕截图_041216_062117_PM.jpg

0×02:配置网络

2.1:简介

这里采用的是无线网卡(wlan0)连接家庭无线网关上网,有线网口(eth0)作为dhcpd服务器连接AP分发IP地址,并作为网关收集client信息。这样做有几个优点:

1.wlan0可以后期变更为市面上有售的无线网卡,作为网络出口,或者可以连接手机热点,手机作为出口,达到移动网关的目的。这些均可以充电宝供电。

2.eth0连接AP,可以扩大信号范围,增强收集强度,在树莓派上设置wireshark拦截所有请求,达到中间人的目的,或者安装XAMPP自建钓鱼网站,骗取用户账户和密码。

3.任何终端均可自行连接到AP上,自动获得IP地址,对树莓派进行配置,比较方便。可以是电脑,平板,或者手机,在公共场合操作非常方便,不必一直扛着电脑。(改掉树莓派上的SSH的默认密码,否则任何人都可以连上你的树莓派。)

4.如果全部要求可移动,那可以增加一个wlan1作为dhcpd服务器分发IP,这样的缺点是信号比较弱。

现在开始搭建:

2.2:

无线网卡设置DHCP模式,连接上无线网:修改/etc/network/interfaces,添加:

allow-hotplug wlan0    #这是网卡wlan0
iface wlan0 inet dhcp   #把wlan0设置成dhcp模式,以自动获取ip地址
wpa-ssid ChinaNGB-WF   #这是你的ssid
wpa-psk my123456    #这是ssid的wifi密码
#wpa-roam /etc/wpa_supplicant/wpa_supplicant.conf #把这一行注释掉
iface default inet dhcp   #默认dhcp模式

然后reboot,然后无线网卡上小灯开始闪呀闪,就知道已经开始工作了。然后再去路由器上找到e8开头的MAC地址(EPUB网卡),SSH连上去。

IMG_20160412_180107.jpg

输入ifconfig命令:会出现已经获取到的ip地址等等,说明已经成功连上无线。这时候你可以把有线断掉了,把eth0口腾空出来。
(PS:在户外的时候,没有家庭网络,但是我们有手机,可以共享无线热点出来,同时连接上树莓派进行设置和监视,同时进行“工作”。此时要把wlan设置成连上我们手机分享出的热点。)

屏幕截图_041216_060339_PM.jpg

2.3

把腾出来的eth0设置成静态地址:修改/etc/network/interfaces:

2.4:安装和配置

安装DHCP服务器为接进热点的设备分配IP:

apt-get install isc-dhcp-server

修改/etc/dhcp/dhcpd.conf:将里面所有的内容都#掉,末尾加上:

ddns-update-style none;
default-lease-time 600;
max-lease-time 7200;
authoritative;
log-facility local7;
subnet 192.168.10.0 netmask 255.255.255.0 {
  range 192.168.10.2 192.168.10.254;
  option domain-name-servers 8.8.8.8;
  option domain-name "raspberry";
  option routers 192.168.10.1;
  option broadcast-address 192.168.10.255;
}

修改/etc/default/isc-dhcp-server:同理将所有内容#掉,末尾加上:

DHCPD_CONF="/etc/dhcp/dhcpd.conf"
INTERFACES="eth0"

isc-dhcp-server这个软件有一点小缺陷,需要自建一个leases文件:

touch /var/lib/dhcp/dhcp.leases

启动isc-dhcp-server:

service isc-dhcp-server start

将isc-dhcp-server加入开机自启:

update-rc.d isc-dhcp-server enable

添加iptables引导流量走向。首先打开流量转发:A.修改/etc/sysctl.conf,设置net.ipv4.ip_forward=1;B.将/proc/sys/net/ipv4/ip_forward修改为1。

修改proc下的文件有点特殊,不可用vi,而是用echo命令的方式:

echo 1 > /proc/sys/net/ipv4/ip_forward

然后添加转发规则(添加完成后可用iptables -t nat -S和iptables -S检查是否添加成功):

开NAT:
sudo iptables -t nat -A POSTROUTING -o wlan0 -j MASQUERADE

wlan的全部进eth,全接受:
sudo iptables -A FORWARD -i wlan0 -o eth0 -m state --state RELATED,ESTABLISHED -j ACCEPT

eth的全部进wlan,全接受:
sudo iptables -A FORWARD -i eth0 -o wlan0 -j ACCEPT

规则开机自添加:
sh -c "iptables-save > /etc/iptables.ipv4.nat"

然后reboot重启。此时已经可以连接eth0实现上网。打开水星的AP,设置SSID为i-Shanghai,密码为空,设置过程略。eth0连接到AP的lan口,启动AP,任何客户端连接到AP即自动连接到eth0口。

0×03

安装监视软件netdata,用于监测树莓派的实时动态;如果需要按年、月、周汇总的动态,可以选择Monitorix(已亲测,可以安装)。参考文档:https://github.com/firehol/netdata/wiki/Installation

3.1:安装netdata:先安装所有的依赖包,配置编译环境:

apt-get install zlib1g-dev gcc make git autoconf autogen automake pkg-config

3.2:第一行从GitHub下载编译文件,然后cd到文件夹,然后运行编译:

git clone https://github.com/firehol/netdata.git --depth=1
cd netdata
./netdata-installer.sh

3.3:加入开机启动,在rc.local下加入/usr/sbin/netdata,然后reboot重启。

3.4:然后在任意浏览器打开 树莓派IP地址:19999,别忘了端口号,默认是19999,然后正确的姿势如下:

屏幕截图(9).png

正确配置和连接好后的拓补如下:

IMG_20160412_214252.jpg

0×04

好了,以上均为foreplay,前戏,现在进入正题。我们的目标是:(没有蛀牙!)用树莓派实现《硬件Tor》,通电自动连接Tor节点,所有流量全部强制通过Tor节点引出,到达目标地址。断线无限重连。不管是手机,还是平板,还是PC,只要连接到i-Shaghai之后,全部实现全程Tor节点流量,实现科学上网。

4.1:Tor是什么:Tor是加密互联网路由器,可以将你的流量加密后在Tor节点上至少进行三层跳板,跳板不定期随机耦合,到达目的网址,混淆你的IP地址,由于其加密传输,可以躲过·政·府·的关键词过滤探针,在国外广泛被应用于暗网入口。有了这款《硬件Tor》之后,可以随时随地进入暗网。本文不介绍如何进入暗网。

initpintu_副本.jpg

4.2:网桥是什么:以上这么多福利满满,在本国被禁掉也是理所当然的事情。所以在国内,第一步是连接上Tor节点之前,需要搭一座桥,以连接上节点服务器。这座桥就叫做网桥。第一代网桥只是简简单单的IP地址,现在的第三代抗干扰混淆网桥(obfsproxy)大概的姿势是这样的:可以突破封锁,直达国外。(以下网桥时效性不保证)

obfs3 37.187.65.72:35304 E47EC8C02C116B77D04738FA2E7B427F241A0164
obfs3 194.132.209.8:57356 B43A8BDE049073CA7AA7D3D46A7F97A93042DF35
obfs3 23.252.105.31:3443 CDAE9FD7710761D1914182F62B1B47F2FBF1FDE1

4.3:网桥这么宝贵,如何获取网桥,本文推荐的方式是去官网直接索取网桥。地址(需要科学上网才可以登录)是:https://bridges.torproject.org/bridges?transport=obfs3
4.4:接下来就是重头戏,安装和配置Tor了(以下需要在VPN环境下进行,无线网卡需要连接到VPN内部)。在更新源/etc/apt/sources.list里添加以下两项:

deb http://deb.torproject.org/torproject.org wheezy main
deb-src http://deb.torproject.org/torproject.org wheezy main

导入软件源的GPG签名密钥:

gpg --keyserver keys.gnupg.net --recv 886DDD89
gpg --export A3C4F0F979CAA22CDBA8F512EE8CBC9E886DDD89 | sudo apt-key add -

然后更新源并安装最新版的Tor:

apt-get update
apt-get install tor deb.torproject.org-keyring

此时最新版的Tor就安装好了,截止发稿,版本为:0.2.7.6-1

屏幕截图_041316_124606_PM.jpg

4.5:安装obfsproxy:

#apt-get install obfsproxy   完成后最新版如下图:

屏幕截图_041316_010726_PM.jpg

4.6:接下来就是对Tor进行配置(离成功越来越近了,欣喜!):修改/etc/tor/torrc:

SocksPort 9050
SocksListenAddress 192.168.10.1:9050    # 树莓派IP
ClientOnly 1
VirtualAddrNetwork 10.192.0.0/10
DNSPort 53
DNSListenAddress 192.168.10.1
AutomapHostsOnResolve 1
AutomapHostsSuffixes .onion,.exit
TransPort 9040
TransListenAddress 192.168.10.1
Log notice file /var/log/tor/notices.log  # log日志路径
RunAsDaemon 1
ClientTransportPlugin obfs3 exec /usr/local/bin/obfsproxy managed        # obfsproxy路径
UseBridges 1
Bridge obfs3 37.187.65.72:35304 E47EC8C02C116B77D04738FA2E7B427F241A0164  #刚刚获取的网桥
Bridge obfs3 194.132.209.8:57356 B43A8BDE049073CA7AA7D3D46A7F97A93042DF35
Bridge obfs3 23.252.105.31:3443 CDAE9FD7710761D1914182F62B1B47F2FBF1FDE1

4.7:添加iptables规则

sudo iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 22 -j REDIRECT --to-ports 22
sudo iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 19999 -j REDIRECT --to-ports 19999
sudo iptables -t nat -A PREROUTING -i eth0 -p udp --dport 53 -j REDIRECT --to-ports 53
sudo iptables -t nat -A PREROUTING -i eth0 -p tcp --syn -j REDIRECT --to-ports 9040

为了让上面的规则在开机时自动添加,执行:
# sh -c "iptables-save > /etc/iptables.ipv4.nat"
第一条命令为22端口开一个特例,这样SSH才能连上树莓派。
第二条命令为19999端口开一个特例,这样netdata才能连上树莓派。
第三条命令将所有DNS(端口号53)请求转发到配置文件torrc中的DNSPort中
第四条命令将所有TCP流量转发到配置文件torrc中的TransPort中

4.8:启动tor客户端进程:
# service tor start
在/var/log/tor/notices.log中查看tor启动情况,正常的是:

Apr 13 13:32:46.000 [notice] Bootstrapped 5%: Connecting to directory server
Apr 13 13:32:46.000 [warn] We were supposed to connect to bridge ’162.217.177.95:18869′ using pluggable transport ‘obfs4′, but we can’t find a pluggable transport proxy supporting ‘obfs4′. This can happen if you haven’t provided a ClientTransportPlugin line, or if your pluggable transport proxy stopped running.
Apr 13 13:32:47.000 [notice] Bootstrapped 10%: Finishing handshake with directory server
Apr 13 13:32:57.000 [notice] Bootstrapped 15%: Establishing an encrypted directory connection
Apr 13 13:32:58.000 [notice] Bootstrapped 20%: Asking for networkstatus consensus
Apr 13 13:32:58.000 [notice] Bootstrapped 25%: Loading networkstatus consensus
Apr 13 13:33:22.000 [notice] Bootstrapped 80%: Connecting to the Tor network
Apr 13 13:33:23.000 [notice] Bootstrapped 90%: Establishing a Tor circuit
Apr 13 13:33:28.000 [notice] Tor has successfully opened a circuit. Looks like client functionality is working.
Apr 13 13:33:28.000 [notice] Bootstrapped 100%: Done

屏幕截图_041316_013613_PM.jpg

然后任何时候reboot,树莓派都会自动启动,开始连接加密节点,同时eth0口等待你的终端进行连接,我们的目的已经实现。网络出口每隔几分钟会自动跳动一次,表现为出口IP不断变化,隐藏你的真实身份。这时候,你可以畅享目前为止军方都无法破解的加密服务了。
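想确认终端的流量确实走了Tor,可以在连接到这个热点的电脑上跑一个简单的检查脚本,请求Tor官方的检测接口(下面的接口地址与返回字段以 check.torproject.org 官方服务的实际返回为准,脚本仅作示意):

import json
import urllib.request

def check_tor(url="https://check.torproject.org/api/ip"):
    # 查询Tor官方检测接口,确认当前出口是否为Tor节点
    with urllib.request.urlopen(url, timeout=30) as resp:
        info = json.loads(resp.read().decode("utf-8"))
    print("出口IP:", info.get("IP"))
    print("是否Tor出口:", info.get("IsTor"))

if __name__ == "__main__":
    check_tor()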

* 作者:Roy_Chen,本文属FreeBuf原创奖励计划文章,未经许可禁止转载


手把手教你使用Hexo + Github Pages搭建个人独立博客


手把手教你使用Hexo + Github Pages搭建个人独立博客

系统环境配置

要使用Hexo,需要在你的系统中支持Nodejs以及Git,如果还没有,那就开始安装吧!

安装Node.js

下载Node.js
参考地址:安装Node.js

安装Git

下载地址:http://git-scm.com/download/

安装Hexo

$ cd d:/hexo
$ npm install hexo-cli -g
$ hexo init blog
$ cd blog
$ npm install
$ hexo g # 或者hexo generate
$ hexo s # 或者hexo server,可以在http://localhost:4000/ 查看

这里有必要提下Hexo常用的几个命令:

  1. hexo generate (hexo g) 生成静态文件,会在当前目录下生成一个新的叫做public的文件夹
  2. hexo server (hexo s) 启动本地web服务,用于博客的预览
  3. hexo deploy (hexo d) 部署博客到远端(比如github, heroku等平台)

另外还有其他几个常用命令:

$ hexo new "postName" #新建文章
$ hexo new page "pageName" #新建页面

常用简写

$ hexo n == hexo new
$ hexo g == hexo generate
$ hexo s == hexo server
$ hexo d == hexo deploy

常用组合

$ hexo d -g #生成部署
$ hexo s -g #生成预览

现在我们打开http://localhost:4000/ 已经可以看到一篇内置的blog了。

目前我安装所用的本地环境如下:(可以通过hexo -v查看)

hexo: 3.2.0
hexo-cli: 1.0.1
os: Windows_NT 6.3.9600 win32 x64
http_parser: 2.5.2
node: 4.4.1
v8: 4.5.103.35
uv: 1.8.0
zlib: 1.2.8
ares: 1.10.1-DEV
icu: 56.1
modules: 46
openssl: 1.0.2g

Hexo主题设置

这里以主题yilia为例进行说明。

安装主题

$ hexo clean
$ git clone https://github.com/litten/hexo-theme-yilia.git themes/yilia

启用主题

修改Hexo目录下的_config.yml配置文件中的theme属性,将其设置为yilia。

更新主题

$ cd themes/yilia
$ git pull
$ hexo g # 生成
$ hexo s # 启动本地web服务器

现在打开http://localhost:4000/ ,会看到我们已经应用了一个新的主题。

Github Pages设置

什么是Github Pages

GitHub Pages 本用于介绍托管在GitHub的项目,不过,由于他的空间免费稳定,用来做搭建一个博客再好不过了。

每个帐号只能有一个仓库来存放个人主页,而且仓库的名字必须是username/username.github.io,这是特殊的命名约定。你可以通过http://username.github.io 来访问你的个人主页。

这里特别提醒一下,需要注意的个人主页的网站内容是在master分支下的。

创建自己的Github Pages

注册GitHub及使用Github Pages的过程已经有很多文章讲过,在此不再详述,可以参考:

一步步在GitHub上创建博客主页 全系列

如何搭建一个独立博客——简明Github Pages与Hexo教程

在这里我创建了一个github repo叫做 jiji262.github.io. 创建完成之后,需要有一次提交(git commit)操作,然后就可以通过链接http://jiji262.github.io/ 访问了。(现在还没有内容,别着急)

部署Hexo到Github Pages

这一步恐怕是最关键的一步了,让我们把在本地web环境下预览到的博客部署到github上,然后就可以直接通过http://jiji262.github.io/访问了。不过很多教程文章对这个步骤语焉不详,这里着重说下。

首先需要明白所谓部署到github的原理。

  1. 之前步骤中在Github上创建的那个特别的repo(jiji262.github.io)一个最大的特点就是其master中的html静态文件,可以通过链接http://jiji262.github.io来直接访问。
  2. Hexo -g 会生成一个静态网站(第一次会生成一个public目录),这个静态文件可以直接访问。
  3. 需要将hexo生成的静态网站,提交(git commit)到github上。

明白了原理,怎么做自然就清晰了。

使用hexo deploy部署

hexo deploy可以部署到很多平台,具体可以参考这个链接. 如果部署到github,需要在配置文件_config.yml中作如下修改:

deploy:
  type: git
  repo: git@github.com:jiji262/jiji262.github.io.git
  branch: master

然后在命令行中执行

hexo d

即可完成部署。

注意需要提前安装一个扩展:

$ npm install hexo-deployer-git --save

使用git命令行部署

不幸的是,上述命令虽然简单方便,但是偶尔会有莫名其妙的问题出现,因此,我们也可以追本溯源,使用git命令来完成部署的工作。

clone github repo

$ cd d:/hexo/blog

$ git clone https://github.com/jiji262/jiji262.github.io.git .deploy/jiji262.github.io

将我们之前创建的repo克隆到本地,新建一个目录叫做.deploy用于存放克隆的代码。

创建一个deploy脚本文件

hexo generate
cp -R public/* .deploy/jiji262.github.io
cd .deploy/jiji262.github.io
git add .
git commit -m "update"
git push origin master

简单解释一下,hexo generate生成public文件夹下的新内容,然后将其拷贝至jiji262.github.io的git目录下,然后使用git commit命令提交代码到jiji262.github.io这个repo的master branch上。

需要部署的时候,执行这段脚本就可以了(比如可以将其保存为deploy.sh)。执行过程中可能需要让你输入Github账户的用户名及密码,按照提示操作即可。

Hexo 主题配置

每个不同的主题会需要不同的配置,主题配置文件在主题目录下的_config.yml。
以我们使用的yilia主题为例,其提供如下的配置项(themes\yilia\_config.yml):

# Header
menu:
  主页: /
  所有文章: /archives
  # 随笔: /tags/随笔

# SubNav
subnav:
  github: "#"
  weibo: "#"
  rss: "#"
  zhihu: "#"
  #douban: "#"
  #mail: "#"
  #facebook: "#"
  #google: "#"
  #twitter: "#"
  #linkedin: "#"

rss: /atom.xml

# Content
excerpt_link: more
fancybox: true
mathjax: true

# Miscellaneous
google_analytics: ''
favicon: /favicon.png

#你的头像url
avatar: ""
#是否开启分享
share: true
#是否开启多说评论,填写你在多说申请的项目名称 duoshuo: duoshuo-key
#若使用disqus,请在博客config文件中填写disqus_shortname,并关闭多说评论
duoshuo: true
#是否开启云标签
tagcloud: true

#是否开启友情链接
#不开启——
#friends: false

#是否开启“关于我”。
#不开启——
#aboutme: false
#开启——
aboutme: 我是谁,我从哪里来,我到哪里去?我就是我,是颜色不一样的吃货…

其他高级使用技巧

绑定独立域名

购买域名
在你的域名注册提供商那里配置DNS解析(GitHub Pages 的IP地址可在其官方文档中查到),然后进入source目录下,添加CNAME文件

$ cd source/
$ touch CNAME
$ vim CNAME # 输入你的域名
$ git add CNAME
$ git commit -m "add CNAME"

使用图床

使用七牛云存储
自己在github上搭建的图床:http://jiji262.github.io/qiniuimgbed/ ,需要首先注册七牛账号使用。

添加插件

添加sitemap和feed插件

$ npm install hexo-generator-feed
$ npm install hexo-generator-sitemap

修改_config.yml,增加以下内容

# Extensions
Plugins:
- hexo-generator-feed
- hexo-generator-sitemap

#Feed Atom
feed:
  type: atom
  path: atom.xml
  limit: 20

#sitemap
sitemap:
  path: sitemap.xml

配完之后,就可以访问http://jiji262.github.io/atom.xmlhttp://jiji262.github.io/sitemap.xml,发现这两个文件已经成功生成了。

添加404公益页面

GitHub Pages有提供制作404页面的指引:Custom 404 Pages

直接在根目录下创建自己的404.html或者404.md就可以。但是自定义404页面仅对绑定顶级域名的项目才起作用,GitHub默认分配的二级域名是不起作用的,使用hexo server在本机调试也是不起作用的。

推荐使用腾讯公益404

添加about页面

$ hexo new page "about"

之后在\source\about\index.md目录下会生成一个index.md文件,打开输入个人信息即可,如果想要添加版权信息,可以在文件末尾添加:

<div style="font-size:12px;border-bottom: #ddd 1px solid; BORDER-LEFT: #ddd 1px solid; BACKGROUND: #f6f6f6; HEIGHT: 120px; BORDER-TOP: #ddd 1px solid; BORDER-RIGHT: #ddd 1px solid">
<div style="MARGIN-TOP: 10px; FLOAT: left; MARGIN-LEFT: 5px; MARGIN-RIGHT: 10px">
<IMG alt="" src="https://avatars1.githubusercontent.com/u/168751?v=3&s=140" width=90 height=100>
</div>
<div style="LINE-HEIGHT: 200%; MARGIN-TOP: 10px; COLOR: #000000">
本文链接:<a href="<%= post.link %>"><%= post.title %></a> <br/>
作者: 
<a href="http://jiji262.github.io/">令狐葱</a> <br/>出处: 
<a href="http://jiji262.github.io/">http://jiji262.github.io/</a>
<br/>本文基于<a target="_blank" title="Creative Commons Attribution-ShareAlike 4.0 International (CC BY-SA 4.0)" href="http://creativecommons.org/licenses/by-sa/4.0/"> 知识共享署名-相同方式共享 4.0 </a>
国际许可协议发布,欢迎转载,演绎或用于商业目的,但是必须保留本文的署名 
<a href="http://jiji262.github.io/">令狐葱</a>及链接。
</div>
</div>

添加Fork me on Github

获取代码,选择你喜欢的代码添加到hexo/themes/yilia/layout/layout.ejs的末尾即可,注意要将代码里的you改成你的Github账号名。

添加支付宝捐赠按钮及二维码支付

支付宝捐赠按钮

在D:\hexo\themes\yilia\layout\_widget目录下新建一个zhifubao.ejs文件,内容如下

<p class="asidetitle">打赏他</p>
<div>
<form action="https://shenghuo.alipay.com/send/payment/fill.htm" method="POST" target="_blank" accept-charset="GBK">
    <br/>
    <input name="optEmail" type="hidden" value="your 支付宝账号" />
    <input name="payAmount" type="hidden" value="默认捐赠金额(元)" />
    <input id="title" name="title" type="hidden" value="博主,打赏你的!" />
    <input name="memo" type="hidden" value="你Y加油,继续写博客!" />
    <input name="pay" type="image" value="转账" src="http://7xig3q.com1.z0.glb.clouddn.com/alipay-donate-website.png" />
</form>
</div>

添加完该文件之后,要在D:/hexo/themes/yilia/_config.yml文件中启用,如下所示,添加zhifubao

widgets:
- category
- tag
- links
- tagcloud
- zhifubao
- rss

二维码捐赠

首先需要到这里获取你的支付宝账户的二维码图片,支付宝提供了自定义功能,可以添加自定义文字。

我的二维码扫描捐赠添加在about页面,当然你也可以添加到其它页面,在D:\hexo\blog\source\about下有index.md,打开,在适当位置添加

<center>
欢迎您捐赠本站,您的支持是我最大的动力!
![](http://7xsxyo.com1.z0.glb.clouddn.com/2016/04/15/FoJ1F6Ht0CNaYuCdE2l52F-Fk9Vk202.png)
</center>
<br/>

<center>可以让图片居中显示,注意将图片链接地址换成你的即可。

添加百度站内搜索

点击进入,点击其它工具->站内检索->现在使用->新建搜索引擎->查看代码,将代码里的id值复制,打开/d/hexo/themes/jacman/_config.yml,配置成如下即可。

baidu_search:     ## http://zn.baidu.com/
  enable: true
  id: "1433674487421172828" ## e.g. "783281470518440642"  for your baidu search id
  site: http://zhannei.baidu.com/cse/search ## your can change to your site instead of the default site

使用不蒜子添加访客统计

详情参考搞定你的网站计数,具体做法很简单,就是在你的themes/your themes/layout/_partial/footer.ejs底部加入这段脚本

<script async src="//dn-lbstatics.qbox.me/busuanzi/2.3/busuanzi.pure.mini.js"></script>

然后在<p class="copyright"></p>中间添加如下统计信息即可

本站总访问量 <span id="busuanzi_value_site_pv"></span> 次, 访客数 <span id="busuanzi_value_site_uv"></span> 人次, 本文总阅读量 <span id="busuanzi_value_page_pv"></span>

不蒜子的官方服务网站是不蒜子,目前最大的弊端就是不开放注册,所以对于运行了一段时间的网站,不蒜子的数据都是从1开始,没办法设置,只有等后期开放注册之后,登入网站才能对统计计数进行设置。

参考链接

Hexo主页
hexo你的博客
Github Pages个人博客,从Octopress转向Hexo
如何搭建一个独立博客——简明Github Pages与Hexo教程
如何在一天之内搭建以你自己名字为域名又具备cool属性的个人博客
手把手教你建github技术博客by hexo
Markdown 语法说明 (简体中文版)

