Batch Geonews: TIGER 2013, ArcGIS for WordPress, RTK GPS for $2k, Yosemite Fires, and much more

Tue, 2013/09/03 – 10:18 — Satri

Catching up on the August geonews, we’re now all up to date with this way-too-long entry.

On the open source front:

On the Esri front:

Discussed over Slashdot:

In the miscellaneous category:

In the maps category:



Convert Latitude & Longitude format VBA


Convert a DMS-packed latitude such as 603225 (i.e. 60°32'25") to a decimal latitude in VBA.

Int(Left( [LATITUDE], 2)) + (Int(Right(Left( [LATITUDE], 4), 2))/60.0 )+( Int(Right( [LATITUDE], 2))/3600.0)
Int(Left( [LONGITUDE], 2)) + (Int(Right(Left( [LONGITUDE], 4), 2))/60.0 )+( Int(Right( [LONGITUDE], 2))/3600.0)
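
For comparison, here is a minimal Python sketch of the same DDMMSS-to-decimal arithmetic (the function name and slicing approach are my own illustration, not part of the original snippet):

def dms_to_decimal(value):
    # e.g. 603225 -> 60 degrees, 32 minutes, 25 seconds
    value = str(value)
    degrees = int(value[0:2])
    minutes = int(value[2:4])
    seconds = int(value[4:6])
    return degrees + minutes / 60.0 + seconds / 3600.0

print(dms_to_decimal(603225))  # roughly 60.5403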


So You’d Like To Make a Map Using Python

Date Tue 22 October 2013 Tags python / gis

Making thematic maps has traditionally been the preserve of a ‘proper’ GIS, such as ArcGIS or QGIS. While these tools make it easy to work with shapefiles, and expose a range of common everyday GIS operations, they aren’t particularly well-suited to exploratory data analysis. In short, if you need to obtain, reshape, and otherwise wrangle data before you use it to make a map, it’s easier to use a data analysis tool (such as Pandas), and couple it to a plotting library. This tutorial will be demonstrating the use of:

  • Pandas
  • Matplotlib
  • The matplotlib Basemap toolkit, for plotting 2D data on maps
  • Fiona, a Python interface to OGR
  • Shapely, for analyzing and manipulating planar geometric objects
  • Descartes, which turns said geometric objects into matplotlib “patches”
  • PySAL, a spatial analysis library

The approach I’m using here uses an interactive REPL (IPython Notebook) for data exploration and analysis, and the Descartes package to render individual polygons (in this case, wards in London) as matplotlib patches, before adding them to a matplotlib axes instance. I should stress that many of the plotting operations could be more quickly accomplished, but my aim here is to demonstrate how to precisely control certain operations, in order to achieve e.g. the precise line width, colour, alpha value or label position you want.

Package installation

This tutorial uses Python 2.7.x, and the following non-stdlib packages are required:

  • IPython
  • Pandas
  • Numpy
  • Matplotlib
  • Basemap
  • Shapely
  • Fiona
  • Descartes
  • PySAL

The installation of some of these packages can be onerous, requiring a great number of third-party dependencies (GDAL & OGR, C & FORTRAN77 (yes, really) compilers). If you’re experienced with Python package installation and building software from source, feel free to install these dependencies (if you’re using OSX, Homebrew and/or Kyngchaos are helpful, particularly for GDAL & OGR), install the required packages in a virtualenv, and skip the rest of this section.
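
As a rough sketch of that route (package names as published on PyPI; it assumes GDAL/OGR and GEOS are already installed system-wide, and note that Basemap usually has to be built from source, so it’s omitted here):

pip install virtualenv
virtualenv mapping && source mapping/bin/activate
pip install ipython numpy pandas matplotlib shapely fiona descartes pysal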

For everyone else: Enthought’s Canopy (which is free for academic users) provides almost everything you need, with the exception of Descartes and PySAL. You can install them into the Canopy User Python quite easily, see this support article for details.

Running the Code

I find IPython Notebook best for this: code can be run in isolation within cells, making it easy to correct mistakes, and graphics are rendered inline, making it easy to fine-tune output. Opening a notebook is straightforward: run ipython notebook --pylab inline from the command line. This will open a new notebook in your browser. Canopy users: the Canopy Editor will do this for you.

Obtaining a basemap

We’re going to be working with basemaps from Esri Shapefiles, and we’re going to plot some data about London on a choropleth map. Here’s how to get the basemap, if you’re an academic user:

  1. Go to http://edina.ac.uk/census/
  2. Log in with your university ID
  3. Go to “Quick Access to Boundary Data”
  4. Select “England”, “Census boundaries”, “2001 to 2010”
  5. Select “English Census Area Statistic Wards, 2001 [within counties]”
  6. Click “List Areas”
  7. Select “Greater London”
  8. Click “Extract Boundary Data”
  9. Download and unzip the file somewhere, say a data directory

Obtaining some data

We’re going to make three maps, using the same data: blue plaque locations within London. In order to do this, we’re going to extract the longitude, latitude, and some other features from the master XML file from Open Plaques. Get it here. This file contains data for every plaque Open Plaques knows about, but it’s incomplete in some cases, and will require cleanup before we can begin to extract a useful subset.

Extracting and cleaning the data

Let’s start by importing the packages we need. I’ll discuss the significance of certain libraries as needed.

from lxml import etree
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import matplotlib.cm as cm
from matplotlib.collections import PatchCollection
from matplotlib.colors import Normalize, LinearSegmentedColormap
import matplotlib.font_manager as fm
from mpl_toolkits.basemap import Basemap
from shapely.geometry import Point, Polygon, MultiPoint, MultiPolygon, shape
from shapely.prepared import prep
from pysal.esda.mapclassify import Natural_Breaks as nb
from descartes import PolygonPatch
import mpl_toolkits.basemap.pyproj as pyproj
import fiona
from itertools import chain

Now, we’re going to extract all the useful XML data into a dict.

tree = etree.parse("data/plaques_20130119.xml")
root = tree.getroot()

output = dict()
output['raw'] = []
output['crs'] = []
output['lon'] = []
output['lat'] = []

for each in root.xpath('/openplaques/plaque/geo'):
    # check what we got back
    output['crs'].append(each.get('reference_system'))
    output['lon'].append(each.get('longitude'))
    output['lat'].append(each.get('latitude'))
    # now go back up to plaque
    r = each.getparent().xpath('inscription/raw')[0]
    if isinstance(r.text, str):
        output['raw'].append(r.text.lstrip().rstrip())
    else:
        output['raw'].append(None)

This will produce a dict containing the coordinate reference system, longitude, latitude, and description of each plaque record. Next, we’re going to create a Pandas DataFrame, drop all records which don’t contain a description, and convert the lon and lat values from string to floating-point numbers.

df = pd.DataFrame(output)
df = df.replace({'raw': 0}, None)
df = df.dropna()
df[['lon', 'lat']] = df[['lon', 'lat']].astype(float)

Now, we’re going to open our shapefile, and get some data out of it, in order to set up our basemap. However, Basemap is fussy about the coordinate reference system (CRS) of the shapefile it uses, so we’ll have to convert ours to WGS84 before we can proceed. This is accomplished using the ogr2ogr tool, on the command line:

ogr2ogr -f "ESRI Shapefile" england_caswa_2001_n.shp england_caswa_2001.shp -s_srs EPSG:27700 -t_srs EPSG:4326

If you’re interested in what this does, see here for a little more detail. With the conversion complete, we can get some data from our shapefile:

shp = fiona.open('data/england_caswa_2001_n.shp')
bds = shp.bounds
shp.close()
extra = 0.01
wgs84 = pyproj.Proj("+init=EPSG:4326")
osgb36 = pyproj.Proj("+init=EPSG:27700")
ll = (bds[0], bds[1])
ur = (bds[2], bds[3])
coords = list(chain(ll, ur))
w, h = coords[2] - coords[0], coords[3] - coords[1]

We’ve done two things here:

  1. Extracted the map boundaries
  2. Calculated the extent, width and height of our basemap
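
Note that the two pyproj.Proj objects defined above aren’t actually used in what follows; they’re handy if you want to reproject individual coordinates in Python rather than shelling out to ogr2ogr. A quick sketch of what the conversion does (the sample easting/northing is my own illustration):

# transform() converts a British National Grid easting/northing pair
# into WGS84 lon/lat -- the same reprojection ogr2ogr performed above
lon, lat = pyproj.transform(osgb36, wgs84, 530000, 180000)
print(lon, lat)  # roughly (-0.128, 51.507), central London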

We’re ready to create a Basemap instance, which we can use to plot our maps on.

m = Basemap(
    projection='tmerc',
    lon_0=-2.,
    lat_0=49.,
    ellps='WGS84',
    llcrnrlon=coords[0] - extra * w,
    llcrnrlat=coords[1] - extra + 0.01 * h,
    urcrnrlon=coords[2] + extra * w,
    urcrnrlat=coords[3] + extra + 0.01 * h,
    lat_ts=0,
    resolution='i',
    suppress_ticks=True)
m.readshapefile(
    'data/england_caswa_2001_n',
    'london',
    color='none',
    zorder=2)

I’ve chosen the transverse Mercator projection because it exhibits less distortion over areas with a small east-west extent. This projection requires us to specify a central longitude and latitude, which I’ve set to -2, 49.

# set up a map dataframe
df_map = pd.DataFrame({
    'poly': [Polygon(xy) for xy in m.london],
    'ward_name': [w['name'] for w in m.london_info],
})
df_map['area_m'] = df_map['poly'].map(lambda x: x.area)
df_map['area_km'] = df_map['area_m'] / 1000000  # 1 km^2 = 1,000,000 m^2

# Create Point objects in map coordinates from dataframe lon and lat values
map_points = pd.Series(
    [Point(m(mapped_x, mapped_y)) for mapped_x, mapped_y in zip(df['lon'], df['lat'])])
plaque_points = MultiPoint(list(map_points.values))
wards_polygon = prep(MultiPolygon(list(df_map['poly'].values)))
# calculate points that fall within the London boundary
ldn_points = filter(wards_polygon.contains, plaque_points)

Our df_map dataframe now contains columns holding:

  • a polygon for each ward in the shapefile
  • its description
  • its area in square metres
  • its area in square kilometres

We’ve also created a series of Shapely Points, which we’ve constructed from our plaques dataframe, and a prepared geometry object from the combined ward polygons. We’ve done this in order to speed up our membership-checking operation significantly. The result is a Pandas series, ldn_points, containing all points which fall within the ward boundaries.
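
As an aside, here’s a tiny self-contained sketch (toy geometry, not from the tutorial) showing why prepared geometries help: prep() returns a geometry with cached spatial indexing, so repeated contains() tests against it run much faster than against the raw polygon.

from shapely.geometry import Point, Polygon
from shapely.prepared import prep

poly = Polygon([(0, 0), (1, 0), (1, 1), (0, 1)])
prepared = prep(poly)  # one-off preparation cost
points = [Point(0.5, 0.5), Point(2.0, 2.0)]
inside = filter(prepared.contains, points)  # keeps only Point(0.5, 0.5)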

The two functions below make it easier to generate colour bars for our maps. Have a look at the docstrings for more detail – in essence, one of them discretises a colour ramp, and the other labels colour bars more easily.

# Convenience functions for working with colour ramps and bars
def colorbar_index(ncolors, cmap, labels=None, **kwargs):
    """
    This is a convenience function to stop you making off-by-one errors
    Takes a standard colourmap, and discretises it,
    then draws a color bar with correctly aligned labels
    """
    cmap = cmap_discretize(cmap, ncolors)
    mappable = cm.ScalarMappable(cmap=cmap)
    mappable.set_array([])
    mappable.set_clim(-0.5, ncolors+0.5)
    colorbar = plt.colorbar(mappable, **kwargs)
    colorbar.set_ticks(np.linspace(0, ncolors, ncolors))
    colorbar.set_ticklabels(range(ncolors))
    if labels:
        colorbar.set_ticklabels(labels)
    return colorbar

def cmap_discretize(cmap, N):
    """
    Return a discrete colormap from the continuous colormap cmap.

        cmap: colormap instance, e.g. cm.jet.
        N: number of colors.

    Example
        x = resize(arange(100), (5, 100))
        djet = cmap_discretize(cm.jet, 5)
        imshow(x, cmap=djet)

    """
    if type(cmap) == str:
        cmap = plt.get_cmap(cmap)
    colors_i = np.concatenate((np.linspace(0, 1., N), (0., 0., 0., 0.)))
    colors_rgba = cmap(colors_i)
    indices = np.linspace(0, 1., N + 1)
    cdict = {}
    for ki, key in enumerate(('red', 'green', 'blue')):
        cdict[key] = [(indices[i], colors_rgba[i - 1, ki], colors_rgba[i, ki]) for i in xrange(N + 1)]
    return LinearSegmentedColormap(cmap.name + "_%d" % N, cdict, 1024)

Let’s make a scatter plot

# draw ward patches from polygons
df_map['patches'] = df_map['poly'].map(lambda x: PolygonPatch(
    x, fc='#555555', ec='#787878', lw=.25, alpha=.9
    , zorder=4))

plt.clf()
fig = plt.figure()
ax = fig.add_subplot(111, axisbg='w', frame_on=False)

# we don't need to pass points to m() because we calculated using map_points and shapefile polygons
dev = m.scatter(
    [geom.x for geom in ldn_points],
    [geom.y for geom in ldn_points],
    5, marker='o', lw=.25,
    facecolor='#33ccff', edgecolor='w',
    alpha=0.9, antialiased=True,
    label='Blue Plaque Locations', zorder=3)
# plot boroughs by adding the PatchCollection to the axes instance
ax.add_collection(PatchCollection(df_map['patches'].values, match_original=True))
# copyright and source data info
smallprint = ax.text(
    1.03, 0,
    'Total points: %s\nContains Ordnance Survey data\n$\copyright$ Crown copyright and database right 2013\nPlaque data from http://openplaques.org' % len(ldn_points),
    ha='right', va='bottom',
    size=4,
    color='#555555',
    transform=ax.transAxes,
)

# Draw a map scale
m.drawmapscale(
    coords[0] + 0.08, coords[1] + 0.015,
    coords[0], coords[1],
    10.,
    barstyle='fancy', labelstyle='simple',
    fillcolor1='w', fillcolor2='#555555',
    fontcolor='#555555',
    zorder=5
)
plt.title("Blue Plaque Locations, London")
plt.tight_layout()
# this will set the image width to 722px at 100dpi
fig.set_size_inches(7.22, 5.25)
plt.savefig('data/london_plaques.png', dpi=100, alpha=True)
plt.show()

Scatter Plot

We’ve drawn a scatter plot on our map, containing points with a 50 metre diameter, corresponding to each point in our dataframe.

This is OK as a first step, but doesn’t really tell us anything interesting about the density per ward – merely that there are more plaques found in central London than in the outer wards.

Creating a Choropleth Map, Normalised by Ward Area

# count the plaques that fall within each ward polygon
df_map['count'] = df_map['poly'].map(lambda x: int(len(filter(prep(x).contains, ldn_points))))
df_map['density_m'] = df_map['count'] / df_map['area_m']
df_map['density_km'] = df_map['count'] / df_map['area_km']
# it's easier to work with NaN values when classifying
df_map.replace(to_replace={'density_m': {0: np.nan}, 'density_km': {0: np.nan}}, inplace=True)

We’ve now created some additional columns, containing the number of points in each ward, and the density per square metre and square kilometre, for each ward. Normalising like this allows us to compare wards.

We’re almost ready to make a choropleth map, but first, we have to divide our wards into classes, in order to easily distinguish them. We’re going to accomplish this using an iterative method called Jenks Natural Breaks.

# Calculate Jenks natural breaks for density
breaks = nb(
    df_map[df_map['density_km'].notnull()].density_km.values,
    initial=300,
    k=5)
# the notnull method lets us match indices when joining
jb = pd.DataFrame({'jenks_bins': breaks.yb}, index=df_map[df_map['density_km'].notnull()].index)
df_map = df_map.join(jb)
df_map.jenks_bins.fillna(-1, inplace=True)

We’ve calculated the classes (five, in this case) for all the wards containing one or more plaques (density_km is not Null), and created a new dataframe containing the class number (0 – 4), with the same index as the non-null density values. This makes it easy to join it to the existing dataframe. The final step involves assigning the bin class -1 to all non-valued rows (wards), in order to create a separate zero-density class.

We also want to create a sensible label for our classes:

jenks_labels = ["<= %0.1f/km$^2$ (%s wards)" % (b, w) for b, w in zip(
    breaks.bins, breaks.counts)]
jenks_labels.insert(0, 'No plaques (%s wards)' % len(df_map[df_map['density_km'].isnull()]))

This will show density/square km, as well as the number of wards in the class.

We’re now ready to plot our choropleth map:

plt.clf()
fig = plt.figure()
ax = fig.add_subplot(111, axisbg='w', frame_on=False)

# use a blue colour ramp - we'll be converting it to a map using cmap()
cmap = plt.get_cmap('Blues')
# draw wards with grey outlines
df_map['patches'] = df_map['poly'].map(lambda x: PolygonPatch(x, ec='#555555', lw=.2, alpha=1., zorder=4))
pc = PatchCollection(df_map['patches'], match_original=True)
# impose our colour map onto the patch collection
norm = Normalize()
pc.set_facecolor(cmap(norm(df_map['jenks_bins'].values)))
ax.add_collection(pc)

# Add a colour bar
cb = colorbar_index(ncolors=len(jenks_labels), cmap=cmap, shrink=0.5, labels=jenks_labels)
cb.ax.tick_params(labelsize=6)

# Show highest densities, in descending order
highest = '\n'.join(value[1] for
    _, value in df_map[(df_map['jenks_bins'] == 4)][:10].sort().iterrows())
highest = 'Most Dense Wards:\n\n' + highest
# Subtraction is necessary for precise y coordinate alignment
details = cb.ax.text(
    -1., 0 - 0.007,
    highest,
    ha='right', va='bottom',
    size=5,
    color='#555555',
)

# Bin method, copyright and source data info
smallprint = ax.text(
    1.03, 0,
    'Classification method: natural breaks\nContains Ordnance Survey data\n$\copyright$ Crown copyright and database right 2013\nPlaque data from http://openplaques.org',
    ha='right', va='bottom',
    size=4,
    color='#555555',
    transform=ax.transAxes,
)

# Draw a map scale
m.drawmapscale(
    coords[0] + 0.08, coords[1] + 0.015,
    coords[0], coords[1],
    10.,
    barstyle='fancy', labelstyle='simple',
    fillcolor1='w', fillcolor2='#555555',
    fontcolor='#555555',
    zorder=5
)
# this will set the image width to 722px at 100dpi
plt.tight_layout()
fig.set_size_inches(7.22, 5.25)
plt.savefig('data/london_plaques.png', dpi=100, alpha=True)
plt.show()

Choropleth

Finally, we can create an alternative map using hex bins. These are a more informative alternative to point maps, as we shall see. The Basemap package provides a hex-binning method, and we require a few pieces of extra information in order to use it:

  1. The longitude and latitude coordinates of the points must be provided as numpy arrays.
  2. We have to specify a grid size, in metres. You can experiment with this setting; I’ve chosen 125.
  3. Setting the mincnt value to 1 means that no bins will be drawn in areas where there are no plaques within the grid.
  4. You can specify the bin type. In this case, I’ve chosen log, which uses a logarithmic scale for the colour map. This more clearly emphasises minor differences in the densities of each bin.

The code:

# draw ward patches from polygons
df_map['patches'] = df_map['poly'].map(lambda x: PolygonPatch(
    x, fc='#555555', ec='#787878', lw=.25, alpha=.9, zorder=0))

plt.clf()
fig = plt.figure()
ax = fig.add_subplot(111, axisbg='w', frame_on=False)

# plot boroughs by adding the PatchCollection to the axes instance
ax.add_collection(PatchCollection(df_map['patches'].values, match_original=True))

# subset the plaque dataframe to the map extent (not used by the hexbin
# call below, which works from ldn_points instead)
df_london = df[
    (df['lon'] >= ll[0]) &
    (df['lon'] <= ur[0]) &
    (df['lat'] >= ll[1]) &
    (df['lat'] <= ur[1])
]

lon_ldn = df_london.lon.values
lat_ldn = df_london.lat.values

# the mincnt argument only shows cells with a value >= 1
# hexbin wants np arrays, not plain lists
hx = m.hexbin(
    np.array([geom.x for geom in ldn_points]),
    np.array([geom.y for geom in ldn_points]),
    gridsize=125,
    bins='log',
    mincnt=1,
    edgecolor='none',
    alpha=1.,
    lw=0.2,
    cmap=plt.get_cmap('Blues'))

# copyright and source data info
smallprint = ax.text(
    1.03, 0,
    'Total points: %s\nContains Ordnance Survey data\n$\copyright$ Crown copyright and database right 2013\nPlaque data from http://openplaques.org' % len(ldn_points),
    ha='right', va='bottom',
    size=4,
    color='#555555',
    transform=ax.transAxes,
)

# Draw a map scale
m.drawmapscale(
    coords[0] + 0.08, coords[1] + 0.015,
    coords[0], coords[1],
    10.,
    barstyle='fancy', labelstyle='simple',
    fillcolor1='w', fillcolor2='#555555',
    fontcolor='#555555',
    zorder=5
)

plt.title("Blue Plaque Density, London")
plt.tight_layout()
# this will set the image width to 722px at 100dpi
fig.set_size_inches(7.22, 5.25)
plt.savefig('data/london_plaques.png', dpi=100, alpha=True)
plt.show()

Hexbin

In a future post, I’ll be discussing Geopandas.


Batch Geonews: Remaining Relevant as a GIS Professional, OpenGeo Suite 4.0, 30TB of Imagery in Esri, and much more

Wed, 2013/11/13 – 08:22 — Satri

Here’s the recent geonews in batch mode, covering a too-long timespan once again.

On the open source / open data front:

On the Esri front:

On the Google front:

In the everything-else category:

In the maps category:


Managing the Asynchronous Nature of Node.js


Node.js allows you to create apps fast and easily. But due to its asynchronous nature, it may be hard to write readable and manageable code. In this article I’ll show you a few tips on how to achieve that.

 

Callback Hell or the Pyramid of Doom

Node.js is built in a way that forces you to use asynchronous functions. That means callbacks, callbacks and even more callbacks. You’ve probably seen, or even written yourself, pieces of code like this:

app.get('/login', function (req, res) {
    sql.query('SELECT 1 FROM users WHERE name = ?;', [ req.param('username') ], function (error, rows) {
        if (error) {
            res.writeHead(500);
            return res.end();
        }
        if (rows.length < 1) {
            res.end('Wrong username!');
        } else {
            sql.query('SELECT 1 FROM users WHERE name = ? && password = MD5(?);', [ req.param('username'), req.param('password') ], function (error, rows) {
                if (error) {
                    res.writeHead(500);
                    return res.end();
                }
                if (rows.length < 1) {
                    res.end('Wrong password!');
                } else {
                    sql.query('SELECT * FROM userdata WHERE name = ?;', [ req.param('username') ], function (error, rows) {
                        if (error) {
                            res.writeHead(500);
                            return res.end();
                        }
                        req.session.username = req.param('username');
                        req.session.data = rows[0];
                        res.redirect('/userarea');
                    });
                }
            });
        }
    });
});

This is actually a snippet straight from one of my first Node.js apps. If you’ve done something more advanced in Node.js you probably understand everything, but the problem here is that the code moves to the right every time you use an asynchronous function. It becomes harder to read and harder to debug. Luckily, there are a few solutions for this mess, so you can pick the right one for your project.

Solution 1: Callback Naming and Modularization

The simplest approach would be to name every callback (which will help you debug the code) and split all of your code into modules. The login example above can be turned into a module in a few simple steps.

The Structure

Let’s start with a simple module structure. To avoid the above situation, where you just split the mess into smaller messes, let’s have it be a class:

var util = require('util');
var EventEmitter = require('events').EventEmitter;

function Login(username, password) {
    // the inner functions below are used as plain callbacks, so `this`
    // inside them would not point at the Login instance -- keep a reference
    var self = this;
    function _checkForErrors(error, rows, reason) {
        
    }
    
    function _checkUsername(error, rows) {
        
    }
    
    function _checkPassword(error, rows) {
        
    }
    
    function _getData(error, rows) {
        
    }
    
    function perform() {
        
    }
    
    this.perform = perform;
}
util.inherits(Login, EventEmitter);

The class is constructed with two parameters: username and password. Looking at the sample code, we need three functions: one to check if the username is correct (_checkUsername), another to check the password (_checkPassword) and one more to return the user-related data (_getData) and notify the app that the login was successful. There is also a _checkForErrors helper, which will handle all errors, and a perform function, which starts the login procedure (and is the only public function in the class). Finally, we inherit from EventEmitter to simplify the usage of this class (the self reference lets the inner functions, which are called as plain callbacks, emit events on the instance).

The Helper

The _checkForErrors function will check if any error occurred or if the SQL query returns no rows, and emit the appropriate error (with the reason that was supplied):

function _checkForErrors(error, rows, reason) {
    if (error) {
        self.emit('error', error);
        return true;
    }
    
    if (rows.length < 1) {
        self.emit('failure', reason);
        return true;
    }
    
    return false;
}

It also returns true or false, depending on whether an error occurred.

Performing the Login

The perform function will have to do only one operation: perform the first SQL query (to check if the username exists) and assign the appropriate callback:

function perform() {
    sql.query('SELECT 1 FROM users WHERE name = ?;', [ username ], _checkUsername);
}

I assume you have your SQL connection accessible globally in the sql variable (just to simplify; discussing whether this is a good practice is beyond the scope of this article). And that’s it for this function.

Checking the Username

The next step is to check if the username is correct, and if so, fire the second query to check the password:

function _checkUsername(error, rows) {
    if (_checkForErrors(error, rows, 'username')) {
        return false;
    } else {
        sql.query('SELECT 1 FROM users WHERE name = ? && password = MD5(?);', [ username, password ], _checkPassword);
    }
}

Pretty much the same code as in the messy sample, with the exception of error handling.

Checking the Password

This function is almost exactly the same as the previous one, the only difference being the query called:

function _checkPassword(error, rows) {
    if (_checkForErrors(error, rows, 'password')) {
        return false;
    } else {
        sql.query('SELECT * FROM userdata WHERE name = ?;', [ username ], _getData);
    }
}

Getting the User-Related Data

The last function in this class will get the data related to the user (the optional step) and fire a success event with it:

function _getData(error, rows) {
    if (_checkForErrors(error, rows)) {
        return false;
    } else {
        self.emit('success', rows[0]);
    }
}

Final Touches and Usage

The last thing to do is to export the class. Add this line after all of the code:

module.exports = Login;

This will make the Login class the only thing that the module exports. It can later be used like this (assuming that you’ve named the module file login.js and it’s in the same directory as the main script):

var Login = require('./login.js');
...
app.get('/login', function (req, res) {
    var login = new Login(req.param('username'), req.param('password'));
    login.on('error', function (error) {
        res.writeHead(500);
        res.end();
    });
    login.on('failure', function (reason) {
        if (reason == 'username') {
            res.end('Wrong username!');
        } else if (reason == 'password') {
            res.end('Wrong password!');
        }
    });
    login.on('success', function (data) {
        req.session.username = req.param('username');
        req.session.data = data;
        res.redirect('/userarea');
    });
    login.perform();
});

This is a few more lines of code, but the readability has increased quite noticeably. Also, this solution does not use any external libraries, which makes it perfect if someone new comes to your project.

That was the first approach; let’s proceed to the second one.

Solution 2: Promises

Using promises is another way of solving this problem. A promise (as you can read in the link provided) “represents the eventual value returned from the single completion of an operation”. In practice, it means that you can chain the calls to flatten the pyramid and make the code easier to read.

We will use the Q module, available in the NPM repository.

Q in a Nutshell

Before we start, let me introduce you to Q. For static classes (modules), we will primarily use the Q.nfcall function. It helps us convert every function following the Node.js callback pattern (where the parameters of the callback are the error and the result) to a promise. It’s used like this:

Q.nfcall(http.get, options);

It’s pretty much like Function.prototype.call. You can also use Q.nfapply, which resembles Function.prototype.apply:

Q.nfapply(fs.readFile, [ 'filename.txt', 'utf-8' ]);

Also, when we create the promise, we add each step with the then(stepCallback) method, catch the errors with catch(errorCallback) and finish with done().

In this case, since the sql object is an instance, not a static class, we have to use Q.ninvoke or Q.npost, which are similar to the above. The difference is that we pass the instance we want to work with as the first argument and the method’s name as a string in the second one, to avoid the method being unbound from the instance.

Preparing the Promise

The first thing to do is to execute the first step, using Q.nfcall or Q.nfapply (use the one that you like more, there is no difference underneath):

var Q = require('q');
...
app.get('/login', function (req, res) {
    Q.ninvoke(sql, 'query', 'SELECT 1 FROM users WHERE name = ?;', [ req.param('username') ])
});

Notice the lack of a semicolon at the end of the line – the function calls will be chained, so it cannot be there. We are just calling sql.query as in the messy example, but we omit the callback parameter – it’s handled by the promise.

Checking the Username

Now we can create the callback for the SQL query, it will be almost identical to the one in the “pyramid of doom” example. Add this after the Q.ninvoke call:

.then(function (rows) {
    if (rows.length < 1) {
        res.end('Wrong username!');
    } else {
        return Q.ninvoke(sql, 'query', 'SELECT 1 FROM users WHERE name = ? && password = MD5(?);', [ req.param('username'), req.param('password') ]);
    }
})

As you can see, we are attaching the callback (the next step) using the then method. Also, in the callback we omit the error parameter, because we will catch all of the errors later. We manually check if the query returned something, and if so we return the next promise to be executed (again, no semicolon because of the chaining).

Checking the Password

As with the modularization example, checking the password is almost identical to checking the username. This should go right after the last then call:

.then(function (rows) {
    if (rows.length < 1) {
        res.end('Wrong password!');
    } else {
        return Q.ninvoke(sql, 'query', 'SELECT * FROM userdata WHERE name = ?;', [ req.param('username') ]);
    }
})

Getting the User-Related Data

The last step is the one where we put the user’s data in the session. Once more, the callback is not much different from the messy example:

.then(function (rows) {
    req.session.username = req.param('username');
    req.session.data = rows[0];
    res.redirect('/userarea');
})

Checking for Errors

When using promises and the Q library, all of the errors are handled by the callback set using the catch method. Here, we are only sending the HTTP 500 no matter what the error is, like in the examples above:

.catch(function (error) {
    res.writeHead(500);
    res.end();
})
.done();

After that, we must call the done method to “make sure that, if an error doesn’t get handled before the end, it will get rethrown and reported” (from the library’s README). Now our beautifully flattened code should look like this (and behave just like the messy one):

var Q = require('q');
...
app.get('/login', function (req, res) {
    Q.ninvoke(sql, 'query', 'SELECT 1 FROM users WHERE name = ?;', [ req.param('username') ])
    .then(function (rows) {
        if (rows.length < 1) {
            res.end('Wrong username!');
        } else {
            return Q.ninvoke(sql, 'query', 'SELECT 1 FROM users WHERE name = ? && password = MD5(?);', [ req.param('username'), req.param('password') ]);
        }
    })
    .then(function (rows) {
        if (rows.length < 1) {
            res.end('Wrong password!');
        } else {
            return Q.ninvoke(sql, 'query', 'SELECT * FROM userdata WHERE name = ?;', [ req.param('username') ]);
        }
    })
    .then(function (rows) {
        req.session.username = req.param('username');
        req.session.data = rows[0];
        res.redirect('/userarea');
    })
    .catch(function (error) {
        res.writeHead(500);
        res.end();
    })
    .done();
});

The code is much cleaner, and it involved less rewriting than the modularization approach.

Solution 3: Step Library

This solution is similar to the previous one, but it’s simpler. Q is a bit heavy, because it implements the whole promises idea; the Step library is there only for the purpose of flattening the callback hell. It’s also a bit simpler to use, because you just call the only function that is exported from the module, pass all of your callbacks as the parameters, and use this in place of every callback. So the messy example can be converted into this, using the Step module:

var step = require('step');
...
app.get('/login', function (req, res) {
    step(
        function start() {
            sql.query('SELECT 1 FROM users WHERE name = ?;', [ req.param('username') ], this);
        },
        function checkUsername(error, rows) {
            if (error) {
                res.writeHead(500);
                return res.end();
            }
            if (rows.length < 1) {
                res.end('Wrong username!');
            } else {
                sql.query('SELECT 1 FROM users WHERE name = ? && password = MD5(?);', [ req.param('username'), req.param('password') ], this);
            }
        },
        function checkPassword(error, rows) {
            if (error) {
                res.writeHead(500);
                return res.end();
            }
            if (rows.length < 1) {
                res.end('Wrong password!');
            } else {
                sql.query('SELECT * FROM userdata WHERE name = ?;', [ req.param('username') ], this);
            }
        },
        function (error, rows) {
            if (error) {
                res.writeHead(500);
                return res.end();
            }
            req.session.username = req.param('username');
            req.session.data = rows[0];
            res.redirect('/userarea');
        }
    );
});

The drawback here is that there is no common error handler. Although any exceptions thrown in one callback are passed to the next one as the first parameter (so the script won’t go down because of an uncaught exception), having one handler for all errors is more convenient most of the time.
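
To make that concrete, here’s a minimal sketch (toy step names, my own illustration) of how an exception thrown in one step arrives as the error argument of the next one, which each step then has to check for itself:

var step = require('step');

step(
    function first() {
        // any exception thrown here is caught by step...
        throw new Error('boom');
    },
    function second(error, result) {
        // ...and delivered here as the first argument; there is no
        // single catch-all like Q's .catch()
        if (error) {
            return console.error(error.message); // 'boom'
        }
        console.log(result);
    }
);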

Which One to Choose?

That’s pretty much a personal choice, but to help you pick the right one, here is a list of pros and cons of each approach:

Modularization:

Pros:

  • No external libraries
  • Helps to make the code more reusable

Cons:

  • More code
  • A lot of rewriting if you’re converting an existing project

Promises (Q):

Pros:

  • Less code
  • Only a little rewriting if applied to an existing project

Cons:

  • You have to use an external library
  • Requires a bit of learning

Step Library:

Pros:

  • Easy to use, no learning required
  • Pretty much copy-and-paste if converting an existing project

Cons:

  • No common error handler
  • A bit harder to indent the step function properly

Conclusion

As you can see, the asynchronous nature of Node.js can be managed and the callback hell can be avoided. I’m personally using the modularization approach, because I like to have my code well structured. I hope these tips will help you write more readable code and debug your scripts more easily.


Batch Geonews: 2014 Predictions, Near Real-Time Imagery of Earth, Location Privacy, LiDAR Formats, and much more

Sun, 2014/01/19 – 13:17 — Satri

The first batch geonews edition of 2014!

On the open source / open data front:

On the Google front:

On the Apple front:

Discussed over Slashdot:

In the everything else category:

In the maps category:


Juan Marin’s Predictions for 2014

Posted on 01/08/14 by Juan Marin

I believe that the geospatial industry has lagged behind the general IT landscape, with concepts like “Big Data” and “Cloud Computing” taking longer to gain a solid foothold. Buzzwords aside, technology has changed a lot but much of the traditional GIS industry has not been aware of these changes. This will be a pivotal year, fueled in part by open source communities that think outside the box, solve real problems, and don’t need to work at the speed of a large proprietary monopoly.

It’s always fun to write about the near future and come back a year later and see how wrong you were in making your predictions! With this in mind, here are my ten predictions for technology trends in the geospatial industry in 2014:

  • Mobile is king. The desktop is in decline as people increasingly access the internet through mobile devices. While some users may still need a powerful workstation, they will soon be outnumbered by casual users that nonetheless require access to complex geographic content and functionality. The geospatial industry has focused on porting desktop concepts instead of creating new paradigms for mobile platforms but it’s time to unchain ourselves from traditional desktop GIS altogether. This is the year that we may see some groundbreaking changes in this space.
  • You will like your data options. In addition to crowd-sourced efforts such as OpenStreetMap, companies like Planet Labs and SkyBox Imaging have the potential to really disrupt the industry by challenging the incumbent data providers with compelling options and innovative products. The key to success for these data providers will be having the shortest time to market possible and providing products that are easy to consume with the minimum amount of effort. Whatever the outcome, the competition is very welcome.
  • Data in real time. Capturing, preparing, and analyzing GIS data has traditionally taken anywhere from minutes to hours or even days but the trend has been towards real-time data feeds— with sensors, the “internet of things” and social media increasingly adopting location as a key facet. As evidenced by our adoption of new database technologies and monitoring tools, we expect this trend to continue.
  • Geo is multiplatform. If your geospatial software is not multiplatform, it will become irrelevant. If we compound the effects of cloud computing (mostly Linux) with mobile devices (Android and iOS), we realize that the Windows operating system is increasingly at a disadvantage in many deployment scenarios and may soon cease to be the dominant platform for the fastest growing segments of the IT industry.
  • Geospatial programming is increasingly polyglot. If you are getting started as a geospatial developer, it’s a safe bet that choosing a scripting language like Python or Javascript will get you very far. The former is probably the most used language in our industry and has even been adopted by proprietary companies as their main scripting language. The latter is especially relevant for web-related technologies and is exploding in the general IT industry. However, it’s important to understand the limitations of these technologies, since solutions based on C, C++, and Java are not going away soon and may even grow thanks to their power and flexibility. Many new languages like Clojure, F#, and Scala are also pushing the functional paradigm into the mainstream and will offer unique advantages to developers building geospatial applications. There hasn’t been a better time to be a geospatial software developer!
  • Big Data is getting Really Big. As Paul said about LIDAR, sensors and satellites now regularly produce terabytes of information daily and systems regularly collect billions of locations in just a few hours. Systems able to handle this amount of information are distributed by nature and have requirements for both batch processing and online analytical capabilities. This will be a hot R&D topic in our industry for years to come and I believe we will see an acceleration in these types of investments over the next year.
  • Distributed by design. This will probably be the new default architectural design for most server-side infrastructure, especially those that require high volumes of activity. There are many factors, some mentioned above, that will make this among the most important paradigm shifts in how we build solutions: skyrocketing numbers of users, real-time data, “Big Data”, cloud computing, and many more. If you want to dive deeper into these concepts, I suggest reading — and signing — the Reactive Manifesto.  While this shift may take some time, we will see more activity around this area in 2014.
  • Indoor mapping will be hot. Not entirely a new thing, but I believe the next year will see a large increase in solutions around indoor mapping. Mainly in the consumer space — prepare for targeted advertisements! — but perhaps some more useful applications will arise. For instance, I foresee really cool and useful applications for persons with disabilities, in the form of visual and hearing aids that take into account precise data for the surroundings, in real time.
  • Open source solutions will keep growing. In early 2013 there were over 3 million developers and over 5 million repositories on GitHub and there are probably many more now. There are thousands of small open source projects that each solve a specific need and there are also well-established projects with dozens of developers working daily on improving their code bases. This trend has already exploded in the general IT industry, where open source has become the default deployment option for software that we use everyday. The internet is built on top of open source software, and so are most of the cutting edge technologies that have flooded the industry with buzzwords like NoSQL, Big Data, etc. Why should the geospatial industry be different? The obvious answer is that it isn’t.
  • A true geospatial collaboration platform. The geospatial industry has been struggling with effectively collaborating around geographic content for much too long. Some previous efforts have not translated well to the web or use subpar approaches.  While some seem great at first, they end up being too restrictive, limited, or expensive. At Boundless we believe the status quo is unsatisfactory and think that 2014 will bring exciting offerings in this space.

Whatever you might think of these technology predictions, I believe one thing is clear: the need for geospatial information keeps growing and our industry is in a very interesting moment; there has never been a better time to be disruptive, and the opportunities are boundless.

Check back tomorrow for Paul Ramsey’s predictions for the open source geospatial community and don’t forget Eddie’s post on the future of Boundless.

Juan Marin, our CTO, has been developing geospatial applications for over a decade in the energy, environmental, defense, telecommunications and retail industries, among others.


Paul Ramsey’s Predictions for 2014

Posted on 01/10/14 by Paul Ramsey

Ten years ago, when PostGIS was at 0.8 and the world was fresh and new, I was pretty convinced our industry was on the cusp of an open source revolution. When folks got a taste of the new, flexible, free tools for building systems they’d naturally discard their legacy proprietary software and swiftly move on to a more enlightened existence. I felt excitement, and wind in my hair.

Similarly, pretty much every year since 2000 has been heralded by someone, somewhere, breathlessly proclaiming that (finally) “this year will be the year of the Linux desktop”.

A funny thing happened on the way to the open source revolution. It turned out to be more of an open source evolution. In aggregate, change has been slow, incremental, though always in the direction of more open source use.

So in looking forward to what to expect in the new year of open source geospatial, my predictions will have to be a little circumspect — the big things will change slowly, but at the edges there will be a great deal of churn and change:

  • Oracle will notice they are losing customers to PostgreSQL. While MySQL always got all the press as “the open source database” it’s been PostgreSQL that has had the enterprise features from the start to go toe-to-toe with the big guy. As Oracle continues to increase maintenance prices to please Wall Street, customers are beginning to think the unthinkable: maybe it’s time to reevaluate their database standard.
  • The coolest stuff will continue to have open source at the foundations. Whether it be the Linux-running, GDAL-enabled satellites of PlanetLabs or the latest Android phablets, the coolest innovations will stand on the shoulders of open source and reach upwards from there.
  • Most of the open source action will be in JavaScript. Juan mentioned that geospatial programming is increasingly polyglot, but the open source arena with greatest level of churn right now is the JavaScript world, both on the client and the server. There is a lot of sound and fury out there. Some of it signifies nothing, but some of it is laying the foundations for the standards we’ll be using for the next decade. Contemporary JavaScript reminds me of Java circa-2005: multiple projects with similar functional goals, competing design philosophies, and huge potential. Separating the signal from the noise in this kind of environment takes real expertise, so I’m glad we have some of the best and brightest JavaScripters in the geospatial world on our team.
  • PaaS will join open source in the evolution revolution. As I am just starting to learn platform-as-a-service (PaaS), I feel it has both the same promise as open source, and the same long organizational learning curve. As a result, it’ll find its way into core IT only slowly, as experienced folks like me pick it up, and the next generation moves into operational jobs. Since PaaS is open source almost by definition, growth of cloud platforms will also further institutionalize open source components for system building.
  • Iterative, open source style development will gain more ground. The public failure of the healthcare.gov site, and lashing of that failure to waterfall methodology, can only be good for agile development. There’s already lots of agile in enterprises, but it’s still something that “progressive” organizations do, it’s not the default. The more people think about technology in open source ways (it’s a process, not a product; it’s about managing change, not achieving a final state), the better for open source.
  • Organizations will yearn to work with OpenStreetMap, and some will figure out how. While licensing continues to lock out many public organizations from participating, others will make peace and begin integrating OSM into their workflows. The lucky ones will get approval from their lawyers to work with OSM directly. The less lucky will settle for using OSM as a change tracking driver to keep authoritative maps up to date.
  • Boundless will integrate even more open source technology into OpenGeo Suite, making it yet easier to get started with enterprise geospatial systems. OK, that was an easy one since Eddie agrees, but I have to get at least one right.

Have a great new year, from everyone at Boundless!

Paul Ramsey has been working with geospatial software for over ten years and, in 2008, received the Sol Katz Award for achievement in open source geospatial software.



Introduction to Promises

This guide assumes familiarity with basic JavaScript and should be suitable for both people new to asynchronous programming and those with some experience.

Motivation

We want our code to be asynchronous, because if we write synchronous code then the user interface will lock up (in client side applications) or no requests will get handled (in server applications). One way to solve this problem is threads, but they create their own problems and are not supported in JavaScript.

One of the simplest ways to make functions asynchronous is to accept a callback function. This is what node.js does (at the time of writing). This works, but has a number of issues:

  1. You lose the separation of inputs and outputs to a function since the callback must be passed as an input
  2. It is difficult to compose multiple serial or parallel operations
  3. You lose a lot of helpful debugging information and error handling ability relating to stack traces and the bubbling up of exceptions
  4. You can no longer use the built-in control flow constructs; they must all be re-invented to work asynchronously.

Many APIs in the browser use some kind of event-based model for control flow, which solves problem 1, but not problems 2 to 4.

Promises aim to solve issues 1 to 3 and can solve problem 4 in ES6 (with the use of generators).
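
As a taste of that fourth point, here is a rough sketch using Q’s Q.async (assuming an ES6 environment with generators and the get function used below); yielding a promise suspends the generator until the promise settles, so ordinary loops and try/catch work across asynchronous calls:

var Q = require('q')

var getAll = Q.async(function* (urls) {
  var results = []
  for (var i = 0; i < urls.length; i++) {
    // each yield "waits" for the promise returned by get()
    results.push(yield get(urls[i]))
  }
  return results
})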

Basic Usage

The core idea behind promises is that a promise represents the value that results from an asynchronous operation; the operation may instead fail with a thrown error. Asynchronous functions should return promises:

var prom = get('http://www.example.com')

If we request the content of the web page http://www.example.com we will be doing it asynchronously so we get a promise back.

In order to extract the value from that promise, we use .done, which queues a function to be executed when the promise is fulfilled with some result.

var prom = get('http://www.example.com')
prom.done(function (content) {
  console.log(content)
})

Note how we’re passing a function that has not been called to .done; it will be called only once, when the promise is fulfilled. We can call .done as many times as we want, as late or as early as we want, and we will always get the same result. For example, it’s fine to call it after the promise has already been resolved:

var cache = {}
function getCache(url) {
  if (cache[url]) return cache[url]
  else return cache[url] = get(url)
}

var promA = getCache('http://www.example.com')
promA.done(function (content) {
  console.log(content)
})
setTimeout(function () {
  var promB = getCache('http://www.example.com')
  promB.done(function (content) {
    console.log(content)
  })
}, 10000)

Of course, requesting a web page can easily go wrong and throw an error. By default, .done just throws that error so it gets logged appropriately and (in environments other than the browser) crashes the application. We often want to attach our own handler instead, though:

var prom = get('http://www.example.com')
prom.done(function (content) {
  console.log(content)
}, function (ex) {
  console.error('Requesting www.example.com failed, maybe you should try again?')
  console.error(ex.stack)
})

Transformation

Often you have a promise for one thing and you need to do some work on it to get a promise for another thing. Promises have a .then method that works a bit like .map on an array.

function getJSON(url) {
  return get(url)
    .then(function (res) {
      return JSON.parse(res)
    })
}

getJSON('http://www.example.com/foo.json').done(function (res) {
  console.dir(res)
})

Note how .then handles any errors for us so that they bubble up the stack just like in synchronous code. You can also handle them when you call .then:

function getJSON(url) {
  return get(url)
    .then(function (res) {
      return JSON.parse(res)
    }, function (err) {
      if (canRetry(err)) return getJSON(url)
      else throw err
    })
}

getJSON('http://www.example.com/foo.json').done(function (res) {
  console.dir(res)
})

Here, errors thrown by JSON.parse are not handled by the error handler we attached, but some errors we received from calling get are handled with a retry. Note how we can return a promise from .then and it is automatically unwrapped:

var prom = get('http://example.com/url-to-request')
  .then(function (url) {
    return get(url)
  })
  .then(function (res) {
    return JSON.parse(res)
  })
prom.done(function (finalResult) {
  console.dir(finalResult)
  //this is actually the very final result
})

Combination

One advantage of a promise being a value is that you can perform useful operations to combine promises. One such operation that most libraries support is all:

var a = get('http://www.example.com')
var b = get('http://www.example.co.uk')
var both = Promise.all([a, b])
both.done(function (res) {
  var a = res[0]
  var b = res[1]
  console.dir({
    '.com': a,
    '.co.uk': b
  })
})

This is extremely useful if you need to run lots of operations in parallel. The idea also extends to large, unbounded arrays of values:

function readFiles(files) {
  return Promise.all(files.map(function (name) {
    return readFile(name)
  }))
}
readFiles(['fileA.txt', 'fileB.txt', 'fileC.txt']).done(function (filesContents) {
  console.dir(filesContents)
})

Of course, serial operations can be composed just using .then:

get('http://www.example.com').then(function (res) {
  console.log('.com')
  console.dir(res)
  return get('http://www.example.co.uk')
}).done(function (res) {
  console.log('.co.uk')
  console.dir(res)
})

And with a little imagination you can use this technique to handle arrays as well:

function readFiles(files) {
  var result = []

  // create an initial promise that is already fulfilled with null
  var ready = Promise.from(null)

  files.forEach(function (name) {
    ready = ready.then(function () {
      return readFile(name)
    }).then(function (content) {
      result.push(content)
    })
  })

  return ready.then(function () {
    return result
  })
}
readFiles(['fileA.txt', 'fileB.txt', 'fileC.txt']).done(function (filesContents) {
  console.dir(filesContents)
})

Implementations / Downloads

There are a large number of Promises/A+ compatible implementations out there, not all of which have .done or Promise.all methods. You should feel free to use whichever implementation best fits your needs. Here are the two I would recommend.

Promise

Promise, by Forbes Lindesay, is a very simple, high performance promise library. It is designed to just provide the bare bones required to use promises in the wild.

If you use node.js or browserify you can install it using npm:

npm install promise

and then load it using require:

var Promise = require('promise')

If you are using any other module system or just want it directly in the browser, you can download a version with a standalone module definition from here (with UMD support) or add a script tag directly:

<script src="http://www.promisejs.org/implementations/promise/promise-3.2.0.js"></script>

Once installed, you can create a new promise using:

var myPromise = new Promise(function (resolve, reject) {
  // call resolve(value) to fulfill the promise with that value
  // call reject(error) if something goes wrong
})

Full documentation can be found at https://github.com/then/promise

Q

Q, by Kris Kowal, is an advanced, fully featured promise library with lots of helper methods to make certain common tasks easier. It is somewhat slower than Promise, but makes up for this with support for better stack traces and additional features.

If you use node.js or browserify you can install it using npm:

npm install q

and then load it using require:

var Q = require('q')

If you are using any other module system or just want it directly in the browser, you can download a version with a standalone module definition from here (with UMD support) or add a script tag directly:

<script src="http://www.promisejs.org/implementations/q/q-0.9.6.js"></script>

Once installed, you can create a new promise using:

var myPromise = Q.promise(function (resolve, reject) {
  // call resolve(value) to fulfill the promise with that value
  // call reject(error) if something goes wrong
})

Full documentation can be found at https://github.com/kriskowal/q

Other

You can find more implementations here.


Add the appropriate Modules/module.modulemap file to the ArcGIS Runtime SDK for iOS to allow it to work as an import for Swift.

 

Pure Swift + ArcGIS Runtime for iOS

To avoid using a bridging header (e.g. if you have a swift-only project), you must first set up the ArcGIS Framework to declare a Module for itself.

Create a Modules/module.modulemap file in the ArcGIS Runtime SDK for iOS ArcGIS.framework (usually installed at ~/Library/SDKs/ArcGIS/iOS/ArcGIS.framework). You will then be able to use the ArcGIS Runtime SDK for iOS in a Swift-only project.

Any .swift file that makes use of the framework will need an import ArcGIS statement.
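
A quick way to confirm the module resolves is to reference a framework class from any .swift file (a hypothetical smoke test; AGSMapView is one of the SDK’s view classes and the frame value is arbitrary):

import ArcGIS

// if this compiles, the module definition is working
let mapView = AGSMapView(frame: CGRectZero)
println(mapView)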

Either run the bash script, or manually create the file with the contents of module.modulemap from this Gist.

Note: Despite this coming from Apple, as with all things Swift at this time, this is entirely unsupported by Esri :)

framework module ArcGIS {
  umbrella header "ArcGIS.h"

  export *
  module * { export * }
}
#!/bin/bash
mkdir -p ~/Library/SDKs/ArcGIS/iOS/ArcGIS.framework/Modules/
echo ZnJhbWV3b3JrIG1vZHVsZSBBcmNHSVMgewogIHVtYnJlbGxhIGhlYWRlciAiQXJjR0lTLmgiCgogIGV4cG9ydCAqCiAgbW9kdWxlICogeyBleHBvcnQgKiB9Cn0K | base64 -D -o ~/Library/SDKs/ArcGIS/iOS/ArcGIS.framework/Modules/module.modulemap

Learn Swift

A curated list of helpful resources to learn Swift. Tutorials, Code Samples, References and more!

Join the Forums to learn the latest and contribute back!


  • Beginner
  • Intermediate
  • Advanced
  • References
  • Code Examples
  • Code Libraries


GNUstep on Ubuntu 14.04


From: Stephen Schaub
Subject: GNUstep on Ubuntu 14.04
Date: Wed, 21 May 2014 12:15:49 -0700 (PDT)
User-agent: G2/1.0

I am new to Objective-C and GNUstep, and have spent the last few days trying to 
get an environment set up with support for blocks and ARC.

Ubuntu 14.04 has several versions of clang packaged for it in the standard 
repositories, so I thought that would be the way to go, since building clang 
from source requires a lot of time and disk space. 

The script here was very helpful in getting a working setup:

http://wiki.gnustep.org/index.php/GNUstep_under_Ubuntu_Linux

However, when I tried to follow it on a stock Ubuntu 14.04 system, I ran into 
trouble. I think my difficulty stemmed from the order the script builds things. 
The post here indicates that GNUstep make should be installed before libobjc2, 
and then reinstalled again afterwards:

http://brilliantobjc.blogspot.kr/2012/12/cocoa-on-freebsd.html

So, here is my script for installing Clang and GNUstep on Ubuntu 14.04. I'm 
hoping it will save someone some time.

# inspired by http://wiki.gnustep.org/index.php/GNUstep_under_Ubuntu_Linux

#install build tools and gnustep prereqs
sudo apt-get -y install clang-3.5 git subversion ninja cmake
sudo apt-get -y install  libffi-dev libxml2-dev libgnutls-dev libicu-dev
sudo apt-get -y install libblocksruntime-dev libkqueue-dev libpthread-workqueue-dev autoconf libtool

cd ~/Downloads

#download source
git clone git://github.com/nickhutchinson/libdispatch.git
svn co http://svn.gna.org/svn/gnustep/modules/core
svn co http://svn.gna.org/svn/gnustep/libs/libobjc2/trunk libobjc2

export CC=clang 
export CXX=clang++

#install GNUstep make
cd ~/Downloads/core/make
./configure --enable-debug-by-default --with-layout=gnustep --enable-objc-nonfragile-abi
make -j8
sudo -E make install
. /usr/GNUstep/System/Library/Makefiles/GNUstep.sh

#install libobjc2
cd ~/Downloads/libobjc2
mkdir build
cd build
cmake .. 
make 
sudo make install

#reinstall GNUstep make after libobjc2
cd ~/Downloads/core/make
./configure --enable-debug-by-default --with-layout=gnustep --enable-objc-nonfragile-abi
make -j8
sudo -E make install

#add GNUstep config to shell config
echo ". /usr/GNUstep/System/Library/Makefiles/GNUstep.sh" >> ~/.bashrc
source ~/.bashrc

#install gnustep-base
cd ~/Downloads/core/base/
./configure
make -j8
sudo -E make install

#install Grand Central Dispatch
cd ~/Downloads/libdispatch
sh autogen.sh
./configure CFLAGS="-I/usr/include/kqueue" LDFLAGS="-lkqueue -lpthread_workqueue -pthread -lm"
make -j8 
sudo -E make install
sudo ldconfig

#install GUI prerequisites
sudo apt-get install -y libjpeg-dev libtiff-dev 
sudo apt-get install -y libcairo-dev libx11-dev:i386 libxt-dev

#install gnustep gui components
cd ~/Downloads/core/gui
./configure
make -j8
sudo -E make install

cd ~/Downloads/core/back
./configure
make -j8
sudo -E make install

Learning Objective-C on Windows with GNUstep and Eclipse

http://fijiaaron.wordpress.com/2013/01/18/learning-objective-c-on-windows-with-gnustep-and-eclipse/

Writing iOS or Cocoa apps does require a Mac with XCode, but you can learn Objective-C (and work on libraries and command-line apps) on Microsoft Windows.

GCC includes an Objective-C compiler. You can install GCC on Windows via Cygwin or MinGW.

You can also get the GNUStep implementation of the OpenStep libraries which includes the same core libraries as Cocoa — including Foundation which contains NSString, etc.

GNUstep includes a graphical environment, GWorkspace, which is a clone of the NeXT workspace manager. I haven’t gotten GWorkspace (or Backbone, an alternative) or any other graphical apps to work on Windows, but you can use graphical GNUstep apps on Linux.

What this means is that the core libraries and command line apps can work on Windows — which is good for learning basic Objective-C concepts.

In order to use GNUstep on Windows, install the GNUstep MSYS System — which gives you the core unix-like tools you know and love.

Next install GNUStep Core. You should be able to open a Shell window and execute commands.

Then install GNUStep Devel which will give you GCC with Objective-C and the GNUStep libraries which allow development.

Gorm is an interface builder for GNUStep which allows you to create (admittedly ugly) GNUstep user interfaces via drag and drop — but I couldn’t get it to work on Windows.

Once you have MSYS, GNUStep Core, and GNUStep Devel, launch the shell from your Windows Start menu.

gnustep start menu

Using a text editor, write a hello world app:

#import <Foundation/Foundation.h>
int main(int argc, const char *argv[])
{
    NSAutoreleasePool *pool = [[NSAutoreleasePool alloc] init];
    NSLog (@"Objective-C with GNUstep on Windows");
    [pool drain]; // release the autorelease pool before exiting
    return 0;
}

Voila, you’re coding in Objective-C on Windows with GCC!

You can see that you’re managing memory the old-fashioned way. You don’t have ARC, but you can get modern features using clang, which is the compiler used by Xcode.

Directions for installing clang on Windows are available here:

http://solarianprogrammer.com/2012/03/21/clang-objective-c-windows/

Basically:

1. Checkout LLVM
2. Checkout clang into the llvm/tools folder
3. Edit llvm/tools/clang/lib/Frontend/InitHeaderSearch.cpp

Look for // FIXME: temporary hack: hard-coded paths. (on line 223 currently)


// FIXME: temporary hack: hard-coded paths.
AddPath("C:\\GNUstep\\include", System, true, false, false);
AddPath("C:\\GNUstep\\msys\\1.0\include", System, true, false, false);
AddPath("C:\\GNUstep\\lib\\gcc\\mingw32\\4.6.1\\include", System, true, false, false);
AddPath("C:\\GNUstep\\lib\\gcc\\mingw32\\4.6.1\\include\\c++", System, true, false, false);
AddPath("C:\\GNUstep\\GNUstep\\System\\Library\\Headers", System, true, false, false);
//AddPath("/usr/local/include", System, true, false, false);
break;

4. Compile

cd llvm && mkdir build && cd build
../configure --enable-optimized --enable-targets=host-only
make && make install

An error is expected and is not a problem: groff: command not found

Now you can use ARC in your Objective-C on GNUstep compiled with clang:

// test.m
#import <Foundation/Foundation.h>
int main(){
    @autoreleasepool{
        NSLog( @"Objective-C with ARC on Windows");
    }
    return 0;
}

Here’s a sample GNUmakefile:

include $(GNUSTEP_MAKEFILES)/common.make
TOOL_NAME = test
test_OBJC_FILES = test.m
include $(GNUSTEP_MAKEFILES)/tool.make

Now run:

make CC=clang

Now, using a plain text editor (even Sublime Text) for writing code isn’t always fun. But you can use Eclipse.

Install the Eclipse CDT (C/C++ Development tools) and follow the directions here to set up Eclipse for using Objective-C:

http://wirecode.blogspot.com/2007/11/objective-c-and-eclipse.html

I haven’t yet gotten Objective-C to compile with Eclipse CDT, but I’ll post more when I’ve got it working.


How to fix virtualbox’s copy and paste to host-machine?


If your guest OS is Ubuntu, then running the following two commands in an Ubuntu terminal should help:

$ killall VBoxClient
$ VBoxClient-all

Introduction to Swiftris


Today you will begin putting the pieces together for a brand new game – see what we did there? To many, Swiftris resembles not only in name but in nearly every other respect a game written in the early 1980s that to this day continues to be played all around the world. Rest assured, Bloc is absolutely certain that any semblance to said game is merely coincidental.

In all seriousness, this is a Tetris clone written in Swift for the iOS platform. This Bloc Book is meant for educational purposes only and we do not recommend releasing your version of Swiftris to the App Store. If you do release Swiftris anyway, hope that you never cross paths with Alexey Pajitnov. As you can see, he’s a very dangerous man.

Alexey Pajitnov

Before we start playing with blocks, you should know the tools we’ll be using: Swift, SpriteKit and Xcode.

Swift

Swift is Apple’s latest programming language. In time it will replace Objective-C as the primary language in which iPhone and Macintosh applications are written. Swiftris is written entirely in Swift and this book will present a wide variety of the language’s capabilities.

If you are not a programmer, do not worry. Regardless of skill level, you will have your very own copy of Swiftris after completing this guide. However, this book does not intend to teach you the language in its entirety. Several aspects of it will be covered in brief and supplemented by external resources.

SpriteKit

SpriteKit is a set of APIs provided by the iOS SDK (software development kit) which allow native 2D game development from within Xcode. Swiftris is powered by SpriteKit and therefore no additional libraries or 3rd party tools will be required to build this great game.

Xcode

As of the writing of this Bloc Book, Xcode 6 Beta 5 is the latest version required for Swift compilation. To download Xcode beta versions, you’ll need to register as an Apple Developer.

Once you’ve registered, download Xcode.

While it’s not required for this book, you may want to consider signing up for the iOS Developer Program. We think the $99 annual fee is wholly worthwhile: it provides access to yet-to-be-public software updates and allows you to publish apps to your iPhone and the App Store.

Once you have Xcode downloaded and installed, you’re ready to move to the next chapter.

Creating a New Game Project

We’ll need to create a new project to build Swiftris in. An Xcode project organizes everything your app needs into one convenient place. Let’s begin by creating a brand new game project in Xcode by doing either of these two things:

  • Click Create a new Xcode project from the Welcome screen:

Or

  • Select File > New > Project… from the file menu:

When the new project window appears, choose Game from under the iOS > Application category and press Next.

The next window beckons you to customize options for your project. Fill out the fields described in the table below:

  • Product Name: Swiftris
  • Organization Name: Bloc
  • Organization Identifier: Bloc.io
  • Language: Swift
  • Game Technology: SpriteKit
  • Devices: iPhone

Press Next and Xcode will ask where to place your new project. Choose a magical, wonderful directory, make sure Create Git repository is checked, and then click Create.

After saving, Xcode should open your brand new Swiftris project, specifically the project properties screen. On this screen, check off the Portrait option under Device Orientation. This file is automatically saved so you won’t have to do anything further.

Result

Run the default game project by pressing ⌘ + R on your keyboard or by clicking the little play button in the top left corner. If a simulator isn’t present, Xcode will download one for you before launching the app.

Congratulations, you’re infinitely closer to a completed Swiftris game than you were 10 minutes ago. That’s a big deal.

Adding Assets

Don’t get me wrong, Spin-The-Bottle: Space Edition was a great game. However, you didn’t start this Bloc Book to make that. At least I hope not; if so, please stop now, because your quest is over. For those of you still interested in building Swiftris, we must unceremoniously delete every unnecessary file provided to us by Xcode.

Open Project Navigator by either clicking the designated icon or pressing ⌘ + 1:

Right-click GameScene.sks and choose the Delete option:

When asked to confirm, make sure to choose Move to trash:

To get rid of the aimless space ship once and for all, click the Images.xcassets folder and highlight the Spaceship entry, press the delete key to delete that sucker.

Trimming The Fat

Having slaughtered those files which are of no use, we must now purge our project of any and all code which we do not require. There’s no need to have lingering source code designed to support inept space pilots. Delete all of the lines marked in red within their corresponding files:

GameScene.swift
import SpriteKit

class GameScene: SKScene {
     override func didMoveToView(view: SKView) {
         /* Setup your scene here */
         let myLabel = SKLabelNode(fontNamed:"Chalkduster")
         myLabel.text = "Hello, World!";
         myLabel.fontSize = 65;
         myLabel.position = CGPoint(x:CGRectGetMidX(self.frame), y:CGRectGetMidY(self.frame));
 
         self.addChild(myLabel)
     }

     override func touchesBegan(touches: NSSet, withEvent event: UIEvent) {
         /* Called when a touch begins */
 
         for touch: AnyObject in touches {
             let location = touch.locationInNode(self)
 
             let sprite = SKSpriteNode(imageNamed:"Spaceship")
 
             sprite.xScale = 0.5
             sprite.yScale = 0.5
             sprite.position = location
 
             let action = SKAction.rotateByAngle(CGFloat(M_PI), duration:1)
 
             sprite.runAction(SKAction.repeatActionForever(action))
 
             self.addChild(sprite)
         }
     }

    override func update(currentTime: CFTimeInterval) {
        /* Called before each frame is rendered */
    }
}

That was a lot, but there’s more:

GameViewController.swift
import UIKit
import SpriteKit

 extension SKNode {
    class func unarchiveFromFile(file : NSString) -> SKNode? {
 
         let path = NSBundle.mainBundle().pathForResource(file, ofType: "sks")
 
         var sceneData = NSData.dataWithContentsOfFile(path, options: .DataReadingMappedIfSafe, error: nil)
         var archiver = NSKeyedUnarchiver(forReadingWithData: sceneData)
 
         archiver.setClass(self.classForKeyedUnarchiver(), forClassName: "SKScene")
         let scene = archiver.decodeObjectForKey(NSKeyedArchiveRootObjectKey) as GameScene
         archiver.finishDecoding()
         return scene
     }
 }

class GameViewController: UIViewController {

    override func viewDidLoad() {
        super.viewDidLoad()

         if let scene = GameScene.unarchiveFromFile("GameScene") as? GameScene {
             // Configure the view.
             let skView = self.view as SKView
             skView.showsFPS = true
             skView.showsNodeCount = true

             /* Sprite Kit applies additional optimizations to improve rendering performance */
             skView.ignoresSiblingOrder = true

             /* Set the scale mode to scale to fit the window */
             scene.scaleMode = .AspectFill

             skView.presentScene(scene)
         }
    }

     override func shouldAutorotate() -> Bool {
         return true
     }

     override func supportedInterfaceOrientations() -> Int {
         if UIDevice.currentDevice().userInterfaceIdiom == .Phone {
             return Int(UIInterfaceOrientationMask.AllButUpsideDown.toRaw())
         } else {
             return Int(UIInterfaceOrientationMask.All.toRaw())
         }
     }

     override func didReceiveMemoryWarning() {
         super.didReceiveMemoryWarning()
         // Release any cached data, images, etc that aren't in use.
     }

    override func prefersStatusBarHidden() -> Bool {
        return true
    }
}

The Sights And Sounds Of Swiftris

In order to experience Swiftris in all its visual and auditory glory, we’re going to need images and sounds, respectively. Download the necessary assets to your Desktop or Downloads folder, anywhere other than the Swiftris project directory. Unzip the archive and perform a drag-and-drop of the Sounds folder into the Project Navigator immediately above the Supporting Files directory. The following window should appear:

Make sure to check the Copy items if needed option. This will place a copy of the directory and all of the sound files within it into your Swiftris project and directory. Click Finish. Repeat this task with the Sprites.atlas folder. Next, select all of the images found within the Images directory and drag them into the Supporting Files folder found in Project Navigator. Once again, make sure that the Copy items if needed checkbox is checked. Finally, click on Images.xcassets to open the window and highlight AppIcon. Drag and drop the appropriate icon file from the downloaded “Blocs” folder into its respective slot: 29pt, 40pt and 60pt.

All this dragging and dropping has my clickin’ hand beat, let’s just code already…

Start At The Back

Let’s put those new background images to work. We’ll begin by establishing GameScene inside of GameViewController. GameScene will be responsible for displaying everything for Swiftris – it will render the tetrominos on screen, the background, and the game board. Furthermore, GameScene will be responsible for playing the sounds and keeping track of the time.

GameViewController, on the other hand, will be responsible for handling user input and communicating between GameScene and a game logic class you’ll write soon.

If you’re working with Swift for the first time, we highly encourage you to type each line by hand in order to get a feel for the language in your fingers… it sounds dirty but it’s good for you.

GameScene.swift
 required init(coder aDecoder: NSCoder!) {
     fatalError("NSCoder not supported")
 }

 override init(size: CGSize) {
     super.init(size: size)

     anchorPoint = CGPoint(x: 0, y: 1.0)

     let background = SKSpriteNode(imageNamed: "background")
     background.position = CGPoint(x: 0, y: 0)
     background.anchorPoint = CGPoint(x: 0, y: 1.0)
     addChild(background)
 }

SpriteKit is based on OpenGL and therefore its coordinate system is opposite to iOS’s native Cocoa coordinates. 0, 0 in SpriteKit is the bottom-left corner. Swiftris will be drawn from the top down, so we anchor our game in the top-left corner of the screen: 0, 1.0. We then create an SKSpriteNode capable of representing our background image and we add it to the scene.

background is the variable’s name, its type is inferred to be that of SKSpriteNode and the keyword let indicates that it cannot be re-assigned. let is akin to Java’s final.

GameViewController.swift
 var scene: GameScene!

override func viewDidLoad() {
    super.viewDidLoad()

     // Configure the view.
     let skView = view as SKView
     skView.multipleTouchEnabled = false

     // Create and configure the scene.
     scene = GameScene(size: skView.bounds.size)
     scene.scaleMode = .AspectFill

     // Present the scene.
     skView.presentScene(scene)
}

In GameViewController we’ve added a member variable, scene. Its declaration: var scene: GameScene! lets us know that it is a variable, its name is scene, its type is GameScene and it is a non-optional value which will eventually be instantiated. Swift typically enforces instantiation either in-line where you declare the variable or during the initializer, init…. In order to circumvent this requirement we’ve added an ! after the type.

In viewDidLoad() we assign scene as promised, using the initializer we had just written moments ago. We tell it to fill the screen and then ask our view to present that scene to the user. Run Swiftris and you should see a super cool background appear. Not titillating enough for you? Read on to continue the fun.

https://www.bloc.io/tutorials/swiftris-build-your-first-ios-game-with-swift#!/chapters/678



Building Location Based Apps with Heroku PostGIS


Smartphones have changed the world – everyone has a device in their pocket that’s constantly connected to the internet and knows where you are. Combined with the rise of digital mapping it has become commonplace to build applications that use GIS (Geographical Information Systems) to digitally represent our physical reality and our location in it. Storing and manipulating geospatial data has become an essential part of application development. If you are building a mobile app it’s becoming table stakes that you take advantage of location.

Today we’re releasing PostGIS 2.0 into public beta as an extension to Heroku Postgres. Now all Heroku Postgres customers will be able to store and manipulate geospatial data as part of their Postgres database. PostGIS 2.0 capabilities are now available in all production tier plans at no additional charge—allowing you to add powerful location functionality to your application.

PostGIS 2.0 will enable a new class of Heroku applications that leverage location data. Whether you are looking to compute walkability scores to nearby schools, target ads based on GPS locations, or search for apartments by specific neighborhoods, PostGIS can help you build richer functionality into your application more easily.

PostGIS now follows the standard extension format within Heroku Postgres. Installing PostGIS is as simple as create extension postgis on any new Postgres 9.2 database (crane plans and above). This means that you can continue starting small with your application, grow functionality, then enable PostGIS at any time to begin taking advantage of it.

Get Started

You can get started with PostGIS 2.0 today by provisioning a database then enabling the extension; or read more about what PostGIS provides.

To provision your database:

heroku addons:add heroku-postgresql:crane

Once provisioned you’ll want to connect to it and enable the extension:

$ heroku pg:psql
create extension postgis;

Your geospatial database is now enabled and ready to use.
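
As a quick sanity check, you can ask the database which version of PostGIS it is running:

SELECT postgis_version();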

More about PostGIS

PostGIS is an extension to PostgreSQL that adds support for geographic objects and for working with them. Like PostgreSQL itself, PostGIS is open source. Since Heroku Postgres runs unmodified from the main branch, you always use standard technology. The technology is flexible and there is no technology lock-in – you can take data in and out at any time.

PostGIS has grown over several years with a large community behind it now supporting a variety of new operators, specialized types, and a long list of functions for interacting with spatial data.

Adding Location to Your App

While SQL and specifically PostgreSQL can perform basic algebra, this method quickly hits limitations when it comes to more complex location searching. Understandably there’s value in providing your users with richer functionality such as searching by neighborhood, by radius of proximity, or by routes versus just direct distance. At PyCon 2013 Julia Grace talked about how some developers use various math tricks to compute distances, or you can take an easier approach by using PostGIS.

By using PostGIS, whether natively or through Rails with ActiveRecord, Django with GeoDjango, or Hibernate, you can very quickly add a variety of rich functionality around location and geographic data.
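
For example, a radius search that would otherwise require hand-rolled distance math becomes a single function call (a sketch against a hypothetical places table; with the geography type, ST_DWithin takes its distance in meters):

-- find every place within 5 km of a point
SELECT name
FROM places
WHERE ST_DWithin(
  geom::geography,
  ST_SetSRID(ST_MakePoint(-122.42, 37.77), 4326)::geography,
  5000
);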

To begin taking advantage of the GIS functionality available within your Heroku Postgres database, read more in Dev Center on getting it set up with ActiveRecord for Rails or within GeoDjango.

Summary

Heroku Postgres is increasingly enabling rich use cases – adding services from the key/value datatype in hstore, querying across Postgres databases with dblink, and now adding rich geospatial functionality. Adding PostGIS within your Postgres database reduces the number of services you need to add to your stack, reducing complexity and allowing you to build location-based functionality into apps faster.

Get started integrating location into your apps today by provisioning your Heroku Postgres database and exploring the functionality of PostGIS 2.0.


12 Impressive JavaScript & HTML5 Presentation Frameworks


Web development is growing at a rapid pace, and more and more people are making it their professional career, contributing to great websites across many different business sectors. Along with this growth, developers keep introducing new techniques that make everyday tasks easier and better defined. From a business point of view, there are several types of businesses that strongly need to embed presentations in their websites.

A presentation is essentially a way of representing an organization’s visual material to create an overall impression. Languages such as JavaScript and HTML5 make it possible to embed these features into web pages, but doing the work by hand is time consuming; frameworks give developers a flexible way to add presentations to their pages. Even WordPress, the popular open-source platform, provides the flexibility to use presentations within web pages.

With the help of these frameworks and libraries, building presentations inside web pages becomes a much more convenient task. Many JavaScript/HTML5 presentation frameworks are available on the web; they help create modern page layouts with presentation features and are the easiest way to build presentations for modern browsers. Let’s have a look at today’s compilation of 12 JavaScript/HTML5 presentation frameworks, and don’t forget to share which framework or library you like the most, and why.

1) Presentation Framework –  Deck.js

Deck.js is one of the most impressive and advanced HTML presentation frameworks, with stunning features and functionality for showing your slides.

2) Presentation Framework –  Tacion.js

Tacion.js is a jQuery Mobile framework that helps you build real-time presentations.

3) Presentation Framework – Fathom.js

This presentation framework creates slideshows in HTML, styles them with CSS, and controls them with some jQuery-powered JavaScript.

4) Presentation Framework – Impress.js

Impress.js is a most impressive HTML/JavaScript presentation framework with an attractive interface and creative functionality.

5) Presentation Framework – Reveal.js

Reveal.js is an HTML presentation framework that provides modern slides for creating unique slideshows.

6) Presentation Framework –  Presenteer.js


7) Presentation Framework –  Jmpress.js

Jmpress.js is also known as a very impressive presentation framework.

8) Presentation Framework –  DZ Slides

Create your presentation with new techniques such as HTML5 and CSS3.

9) Presentation Framework –  slides

You can create a better slideshow or presentation with plain HTML.

10) Presentation Framework –  Slides Google Code


11) Presentation Framework  – Perkele.js


12) Presentation Framework –  HTML Slidy



Building a Simple Geodata Service With Node, PostGIS, and Amazon


DEC 11TH, 2013

tl;dr

This post describes the construction of a simple, lightweight geospatial data service using Node.js, PostGIS and Amazon RDS. It is somewhat lengthy and includes a number of code snippets. The post is primarily targeted at users who may be interested in alternative strategies for publishing geospatial data but may not be familiar with the tools discussed here. This effort is ongoing and follow-up posts can be expected.

Rationale

I’m always looking for opportunities to experiment with new tools and the announcement of PostgreSQL/PostGIS support on Amazon RDS piqued my curiosity. Over the past six months, I have run into the repeated need on a couple of projects to be able to get the bounding box of various polygon features in order to drive dynamic mapping displays. Additionally, the required spatial references of these projects have varied beyond WGS84 and Web Mercator.

With that, the seeds of a geodata service were born. I decided to build one that would, via a simple HTTP call, return the bounding box of a polygon or the polygon itself, in the spatial reference of my choice as a single GeoJSON feature.

I knew I wanted to use PostGIS hosted on Amazon RDS to store my data. Here are the rest of the building blocks for this particular application:

  1. Node.js
  2. Express web application framework for Node
  3. PG module for accessing PostgreSQL with Node
  4. Natural Earth 1:10M country boundaries

Setting up PostGIS on Amazon RDS

Setting up the PostgreSQL instance on RDS was very easy. I simply followed the instructions here for doing it in the AWS Management Console. I also got a lot of use out of this post by Josh Berkus. Don’t forget to also set up your security group to govern access to your database instance as described here. I prefer to grant access to specific IP addresses.

Now that the Amazon configuration is done, your RDS instance essentially behaves the same as if you had set it up on a server in your server room. You can now access the instance using all of the standard PostgreSQL tools with which you are familiar. This is good because we need to do at least one more thing before we load our spatial data: we have to enable the PostGIS extension. I find that it is easiest to accomplish this at the command line:

psql -U {username} -h {really long amazon instance host name} {database name}

Once you’ve connected, issue the command to enable PostGIS in your database:

CREATE EXTENSION postgis;

You may also want to enable topology while you’re here:

CREATE EXTENSION postgis_topology;

This should complete your setup. Now you are ready to load data.

Loading Spatial Data

As I mentioned above, we are now dealing with a standard PostgreSQL server that happens to be running on Amazon RDS. You can use whatever workflow you prefer to load your spatial data.

I downloaded the Natural Earth 1:10M country polygons for this effort. Once downloaded, I used the DB Manager extension to QGIS to import the data to PostgreSQL. I also did a test import with OGR. Both worked fine so it’s really a matter of preference.

Building the Application

I chose to use Node.js because it is very lightweight and ideal for building targeted web applications. I decided to use the Express web framework for Node, mainly because it makes things very easy. To access PostgreSQL, I used the node-postgres module. I was planning to deploy the application in an Ubuntu instance on Amazon EC2, so I chose to do the development on Ubuntu. Theoretically, that shouldn’t matter with Node but the node-postgres module builds a native library when it is installed so it was a factor here.

After building the package.json file and using it to install Express, node-postgres, and their dependencies, I built a quick server script to act as the web interface for the application. This is where Express really excels, in that it makes it easy to define resource paths in an application.

server.js
var express = require('express'),
    geo = require('./routes/geo');

var app = express();

app.get('/countries/:id/bbox', geo.bbox);
app.get('/countries/:id/bbox/:srid', geo.bboxSrid);
app.get('/countries/:id/polygon', geo.polygon);
app.get('/countries/:id/polygon/:srid', geo.polygonSrid);

app.listen(3000);
console.log('Listening on port 3000...');

The four “app.get” statements above define calls to get either the bounding box or the actual polygon for a country. When the “:srid” parameter is not specified, the resulting feature is returned in the default spatial reference of WGS84. If a valid EPSG spatial reference code is supplied, then the resulting feature is transformed to that spatial reference. The “:id” parameter in all of the calls represents the ISO Alpha-3 code for a country. You will notice that the application listens on port 3000. More on that later.

The next step is to define the route handlers. In this application, this is where interaction with PostGIS takes place. Note that each of the exports corresponds to the callback functions in the app.get statements above.

geo.js
var pg = require('pg');
var conString = "postgres://username:password@hostname.rds.amazonaws.com:5432/database"; //TODO: point to RDS instance

exports.bbox = function(req, res) {
    var client = new pg.Client(conString);
    client.connect();
    var crsobj = {"type": "name","properties": {"name": "urn:ogc:def:crs:EPSG:6.3:4326"}};
    var idformat = "'" + req.params.id + "'";
    idformat = idformat.toUpperCase();
    var query = client.query("select st_asgeojson(st_envelope(shape)) as geojson from ne_countries where iso_a3 = " + idformat + ";");
    var retval = "no data";
    query.on('row', function(result) {
      client.end();
        if (!result) {
          return res.send('No data found');
        } else {
          res.setHeader('Content-Type', 'application/json');
        //build a GeoJSON feature and return it
          res.send({type: "feature",crs: crsobj, geometry: JSON.parse(result.geojson), properties:{"iso": req.params.id, "representation": "extent"}});
        }
      });

};

exports.bboxSrid = function(req, res) {
    var client = new pg.Client(conString);
    client.connect();
    var crsobj = {"type": "name","properties": {"name": "urn:ogc:def:crs:EPSG:6.3:" + req.params.srid}};
    var idformat = "'" + req.params.id + "'";
    idformat = idformat.toUpperCase();
    var query = client.query("select st_asgeojson(st_envelope(st_transform(shape, " + req.params.srid + "))) as geojson from ne_countries where iso_a3 = " + idformat + ";");
    var retval = "no data";
    query.on('row', function(result) {
      client.end();
        if (!result) {
          return res.send('No data found');
        } else {
          res.setHeader('Content-Type', 'application/json');
          res.send({type: "feature",crs: crsobj, geometry: JSON.parse(result.geojson), properties:{"iso": req.params.id, "representation": "extent"}});
        }
      });
};

exports.polygon = function(req, res) {
    //TODO: Flesh this out. Logic will be similar to bounding box.
    var client = new pg.Client(conString);
    client.connect();
    var crsobj = {"type": "name","properties": {"name": "urn:ogc:def:crs:EPSG:6.3:4326"}};
    var idformat = "'" + req.params.id + "'";
    idformat = idformat.toUpperCase();
    var query = client.query("select st_asgeojson(shape) as geojson from ne_countries where iso_a3 = " + idformat + ";");
    var retval = "no data";
    query.on('row', function(result) {
      client.end();
        if (!result) {
          return res.send('No data found');
        } else {
          res.setHeader('Content-Type', 'application/json');
          res.send({type: "feature", crs: crsobj, geometry: JSON.parse(result.geojson), properties:{"iso": req.params.id, "representation": "boundary"}});
        }
      });
};

exports.polygonSrid = function(req, res) {
    var client = new pg.Client(conString);
    client.connect();
    var crsobj = {"type": "name","properties": {"name": "urn:ogc:def:crs:EPSG:6.3:" + req.params.srid}};
    var idformat = "'" + req.params.id + "'";
    idformat = idformat.toUpperCase();
    var query = client.query("select st_asgeojson(st_transform(shape, " + req.params.srid + ")) as geojson from ne_countries where iso_a3 = " + idformat + ";");
    var retval = "no data";
    query.on('row', function(result) {
      client.end();
        if (!result) {
          return res.send('No data found');
        } else {
          res.setHeader('Content-Type', 'application/json');
          res.send({type: "feature",crs: crsobj, geometry: JSON.parse(result.geojson), properties:{"iso": req.params.id, "representation": "boundary"}});
        }
      });
};

The PostGIS spatial SQL for each function is shown in the “client.query” calls in the code above. This approach is very similar to constructing SQL calls in a number of other application environments. You will notice that a coordinate reference system object is constructed and attached to each valid response, which is structured as a GeoJSON feature. The code currently assumes EPSG codes but that may be addressed in a future version.
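
Note that these snippets interpolate request parameters directly into the SQL string. node-postgres also supports parameterized queries, which is a safer pattern; a minimal sketch of the bbox query rewritten that way:

// the same bbox query with a parameterized value instead of
// string concatenation (guards against SQL injection)
var query = client.query(
  "select st_asgeojson(st_envelope(shape)) as geojson " +
  "from ne_countries where iso_a3 = $1;",
  [req.params.id.toUpperCase()]
);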

The above modules do most of the heavy lifting. The full code for this sample is available here.

To test the application, simply issue the following command in a terminal:

node server.js (this assumes you are running from the same directory in which server.js resides; the file extension is optional)

Your web application is now listening on port 3000. In a browser, visit the following URL:

http://localhost:3000/countries/irl/bbox

This should return a GeoJSON feature representing the bounding box of Ireland in WGS84. You can then test the other three calls with appropriate parameters. To get the bounding box in Web Mercator, go to:

http://localhost:3000/countries/irl/bbox/3785

Deploying the Application

The application should now be ready to deploy. In my case, I created an Ubuntu EC2 instance (free tier). Using SSH, I made sure Node and git were installed on the machine. Additionally, I installed Forever which allows a Node application to run continuously (similar to a service on Windows). This can also be done using an upstart script but I chose Forever. I may switch to using PM2 at some point.

Now, it’s simply a matter of installing the application code to the instance via git, wget, or the method of your choice. Once installed, be sure to go to the folder containing the code and issue the “npm install” command. This will read the package.json and install Express, node-postgres, and other dependencies. Since some native code is built in the process, you may need to issue the command under sudo.

I mentioned above that the application listens on port 3000. The Ubuntu instance, by default, will not allow the application to listen on port 80. This can be mitigated in a number of ways but I issued the following command to redirect traffic from 80 to 3000. Since this instance is single-use, this approach is sufficient.

sudo iptables -t nat -A PREROUTING -p tcp --dport 80 -j REDIRECT --to-ports 3000

Once you are ready to go, you’ll want to start the application with the following command:

forever start server (again assuming you are executing from the directory containing server.js)

A couple of Amazon notes: 1) You may want to assign an elastic IP to your instance for a persistent IP address and 2) you’ll want to remember to configure your RDS security group to allow access from your instance’s IP address.

Conclusion

If everything has gone correctly, you should be able to execute the above URLs (using your instance IP address) and get a response like the following, which you should be able to load directly into QGIS or another GeoJSON-literate client. Altogether, I was able to assemble this in one evening. This small collection of open-source tools, combined with the Amazon infrastructure, seems to provide a straightforward path to a hosted geodata service. This example is intentionally simple but PostGIS provides a robust collection of functions that can be exploited in a similar manner, leading to more advanced processing or analysis. I will continue my experimentation but am encouraged by what I have seen so far.

Sample Response

irl_bbox.json
{
  "type": "feature",
  "crs": {
    "type": "name",
    "properties": {
      "name": "urn:ogc:def:crs:EPSG:6.3:4326"
    }
  },
  "geometry": {
    "type": "Polygon",
    "coordinates": [
      [
        [
          -10.4781794909999,
          51.4457054710001
        ],
        [
          -10.4781794909999,
          55.386379299
        ],
        [
          -5.99351966099994,
          55.386379299
        ],
        [
          -5.99351966099994,
          51.4457054710001
        ],
        [
          -10.4781794909999,
          51.4457054710001
        ]
      ]
    ]
  },
  "properties": {
    "iso": "irl",
    "representation": "extent"
  }
}

Realtime Maps With Meteor and Leaflet


Realtime Maps With Meteor and Leaflet – Part One

DEC 27TH, 2013

this ‘map’ is actually a static image

The parties example bundled with Meteor is a nifty demonstration of the framework’s core principles, but it uses a 500 x 500 pixel image of downtown San Francisco as a faux map. This means that we cannot pan or zoom the “map,” and when we double-click the image to create new parties, the circle markers are drawn at the position of the clicks in relation to the image element in the browser window, and not at geospatial coordinates.

circles drawn over the static image

I decided to update the example to use Leaflet.js to make a real map that looked and felt as close to the original example as possible. In particular, I wanted to preserve the color-coded circles (red for private, blue for public parties) labeled with the number of RSVPs, and the larger animated circle indicating which party is currently selected, with its details displayed in a section outside the map. This is a useful pattern for displaying individual marker details without using a popup that occludes part of the map.

Here is the end result with source code. In the next two posts, I will go over the changes I made to the original example. I won’t be covering how Meteor works, and will assume you have some understanding of how the parties example works as well.

Setting the Stage

First off, I created the example and added leaflet to the project using Meteorite.

$ meteor create --example parties

$ cd parties

$ mrt add leaflet
leaflet: Leaflet.js, mobile-friendly interactive maps....

I then edited the page template to use Bootstrap’s fluid classes to generate a responsive page layout and added a window.resize() handler to adjust the map’s size as the browser is resized. I use this pattern when creating responsive Leaflet maps, and it’s not specific to Meteor.

<div class="container-fluid">
  <div class="row-fluid">
    <div class="span4">
      {{> details}}
      {{#if currentUser}}
      <div class="pagination-centered">
        <em><small>Double click the map to post a party!</small></em>
      </div>
      {{/if}}
    </div>
    <div class="span8">
        {{> map}}
    </div>
  </div>
</div>
$(window).resize(function () {
  var h = $(window).height(), offsetTop = 90; // Calculate the top offset
  $mc = $('#map_canvas');
  $mc.css('height', (h - offsetTop));
}).resize();

Map Initialization

Stamen Design’s toner themed map tiles make a nice replacement for the black & white map image in the example. I disabled double-click and touch zoom when initializing the map since those actions are how users create new parties, and I increased tile opacity to lighten the overall background and improve the visibility of markers on the map. Leaflet initialization code goes into the map template’s rendered() callback.

map = L.map($('#map_canvas'), {
  doubleClickZoom: false,
  touchZoom: false
}).setView(new L.LatLng(41.8781136, -87.66677956445312), 13);

L.tileLayer('http://{s}.tile.stamen.com/toner/{z}/{x}/{y}.png', {opacity: .5}).addTo(map);

The next significant change was to replace the map template’s event handler from the original example with Leaflet’s "dblclick" event handler to manage the creation of new parties. The Leaflet version conveniently returns a LatLng which I saved to a Session variable before triggering createDialog. The mechanism to trigger dialogs by setting the associated Session variables Session.showCreateDialog and Session.showInviteDialog is unchanged from the original example, and it works because Meteor Session variables are reactive.

map.on("dblclick", function(e) {
  if (! Meteor.userId()) // must be logged in to create parties
    return;

  Session.set("createCoords", e.latlng);
  Session.set("showCreateDialog", true);
});
<template name="page">
  {{#if showCreateDialog}}
    {{> createDialog}}
  {{/if}}
  ...
  ...
</template>

Creating and Saving a Party to the Database

This part of the application is also more or less unchanged from the original example except that I passed the party’s LatLng (instead of click position) along with other details from the createDialog template to the Meteor.methods() call to createParty. If the callback is successful, the new party’s _id is saved to another reactive Session variable Session.selected, which drives the details template on the left.

var title = template.find(".title").value;
var description = template.find(".description").value;
var public = ! template.find(".private").checked;
var latlng = Session.get("createCoords");

Meteor.call('createParty', {
  title: title,
  description: description,
  latlng: latlng,
  public: public
}, function (error, partyId) {
  if (! error) { //party was successfully added to the server's mongo collection
    Session.set("selected", partyId);
    ...
  }
});

Adding Markers to the Map in Realtime

As soon as a new party is added to the Parties mongo collection on the server, behind the scenes, Meteor transmits it back to a client-side minimongo collection with the same name on all connected and authorized clients. This can be verified by typing Parties.findOne() into the JavaScript console. This is well and good, but the next task is to replace the D3 code to draw circles from the original example with code to add Leaflet markers to the map.

To do that, I hooked up a cursor.observe() added() callback to create the map marker and I added a click handler to the marker to update the Session.selected variable with the party’s _id. As users click on different parties, this reactively updates the context for the details template on the left. I also saved a reference to the marker in a local markers hash to efficiently access the marker for future changes. Since we only need to set this up once, I put this code into the map template’s created() callback.

var map, markers = {};

Template.map.created = function() {
  Parties.find({}).observe({
    added: function(party) {
      var marker = new L.Marker(party.latlng, {
        _id: party._id,
        icon: createIcon(party)
      }).on('click', function(e) {
        Session.set("selected", e.target.options._id);
      });
      map.addLayer(marker);
      markers[marker.options._id] = marker;
    },
    ...
    ...
  });
}

The final bit of fanciness here is my createIcon() helper function to create a lightweight DivIcon that uses a simple div element instead of an image icon. I used CSS border-radius to style the div as a circle of the appropriate color and set CSS line-height to the height of the div to vertically center the text. The attending() helper function from the original example returns the number of Yes RSVPs.

var createIcon = function(party) {
  var className = 'leaflet-div-icon ';
  className += party.public ? 'public' : 'private';
  return L.divIcon({
    iconSize: [30, 30], // set size to 30px x 30px
    html: '<b>' + attending(party) + '</b>',
    className: className
  });
}
.leaflet-div-icon {
  border-radius: 50%;
  border: none;
  line-height: 30px;
  font-family: verdana;
  text-align: center;
  color: white;
  opacity: .8;
  vertical-align: middle;
}

.leaflet-div-icon.public {
  background: #49AFCD;
}

.leaflet-div-icon.private {
  background: #DA4F49;
}

Now I can log in and create a few parties, and they all show up as markers with the appropriate color and label. When I click on a marker, its details are automatically rendered into the details template on the left. But there’s no visual indication on the map as to which party is currently selected — I just need to remember which marker I clicked on last! As it turns out, this usability quirk is easy to address.

 

Realtime Maps With Meteor and Leaflet – Part Two

DEC 28TH, 2013

this is a Leaflet map with DivIcon markers

Recap

In the last post, I initialized a Leaflet map to work with Stamen Design’s toner themed map tiles and Bootstrap’s responsive layout. I then set up a double-click event handler to gather additional details about the new party, and hooked up the dialog’s save button to pass those details to a Meteor.methods() call to save the party into a server-side mongo collection. Finally, I hooked up a cursor.observe() added() callback to the client-side minimongo collection and set up the callback to automatically add a circular DivIcon marker at the specified coordinates.

Updating Party Details in the Database

A party document looks something like this:

{
  _id: "22dQwpajD64LCv4QW",
  title: "1871",
  description: "Party like it's 1871!",
  latlng: {
    lat: 41.88298161317542,
    lng:  -87.63811111450194
  },
  public: false,
  owner: "52xdsNjprquesL2tQ",
  invited: ["52xdsNjprquesL2tQ", "ci7bzkJCpH9R7HCZK", "5qhRdKFcsmPnxZKBr"],
  rsvps: [
    {
      rsvp: "yes",
      user: "52xdsNjprquesL2tQ"
    },
    {
      rsvp: "maybe",
      user: "ci7bzkJCpH9R7HCZK"
    }
  ]
}

Each party contains an array of RSVP objects, which must be updated when any user adds or updates their RSVP to the party. In addition, private parties contain a set of invited users’ ids; the party owner can invite additional users at any time. So rsvps and invited are the two mutable party attributes in our example. The owner, title, description, coordinates or public/private setting cannot be changed, but a party’s owner can delete the party if no user is RSVPd as Yes.

The code to update and delete parties in the server-side mongo collection is virtually unchanged from the original. The invite() and rsvp() template event handlers are hooked to Meteor.methods() calls that perform the necessary checks before updating the mongo collection on the server. As usual, behind the scenes, Meteor synchronizes the client-side minimongo collection with the server collection.
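
Those method bodies boil down to a guarded mongo update. A hypothetical sketch of rsvp (the example’s actual code also handles changing an existing RSVP):

Meteor.methods({
  rsvp: function (partyId, rsvp) {
    if (! this.userId)
      throw new Meteor.Error(403, "You must be logged in to RSVP");
    // append this user's response to the party's rsvps array
    Parties.update(partyId, {
      $push: {rsvps: {user: this.userId, rsvp: rsvp}}
    });
  }
});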

Updating and Removing Map Markers in Realtime

I hooked up the cursor.observe() changed() callback to update the party’s icon, and the removed() callback to delete the marker from the map and the local markers hash.

var map, markers = {};

Template.map.created = function() {
  Parties.find({}).observe({
    added: function(party) {/* see previous post */},
    changed: function(party) {
      var marker = markers[party._id];
      if (marker) marker.setIcon(createIcon(party));
    },
    removed: function(party) {
      var marker = markers[party._id];
      if (map.hasLayer(marker)) {
        map.removeLayer(marker);
        delete markers[party._id];
      }
    }
  });
}

Using a Halo Marker to Indicate Which Party Is Selected

a selected party

Up to this point, there’s been no visual indication on the map as to which party is currently selected. Like in the original Parties example, I solved this by creating a 50px x 50px transparent grey circular marker and making it concentric with the currently selected party’s 30px marker, such that it formed a 10px halo around the selected party. The halo marker is purely a UI artefact that does not need to be saved on the server.

L.divIcon({
  iconSize: [50, 50], // set to 50px x 50px
  className: 'leaflet-animated-div-icon'
})
.leaflet-animated-div-icon {
  border-radius: 50%;
  border: none;
  opacity: .2;
  background: black;
}

Animating the Halo Marker

For a final flourish, I used the AnimatedMarker Leaflet plugin from OpenPlans to animate the halo’s movement on the map when a user selects different parties rather than simply making it reappear at a different location. AnimatedMarker takes a Leaflet polyline object as the first argument to its initialize function, and draws a marker at the beginning of the polyline, which it then animates along the polyline at a speed (in meters/ms) that’s configurable via a second argument.

I needed to make a minor tweak to the plugin’s source code to support my needs: AnimatedMarker does not allow setting the animation polyline after the marker is initialized. In other words, it requires the animation path to be known before creating the marker. I wanted to create the marker around the currently selected party without knowledge of its future animation path, and to set the animation path dynamically as soon as a user selected a different marker — the path would be a segment from the current location to the center of the selected marker. To accomplish this, all I needed to do was reset the animation index in the marker’s setLine method. This modification is available at my fork on GitHub.
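
With that tweak in place, moving the halo reactively takes only a few lines (a hypothetical sketch; markers is the local hash from part one and haloMarker is the animated halo):

Deps.autorun(function () {
  var marker = markers[Session.get("selected")];
  if (marker) {
    // animate from wherever the halo currently is to the selected marker
    haloMarker.setLine([haloMarker.getLatLng(), marker.getLatLng()]);
    haloMarker.start();
  }
});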

And ta-da! This is the end result: http://www.chicago-parties.meteor.com with source code for the complete application. You need to log in with a github account to create or RSVP to parties.


ReactiveCocoa 2.x With Swift


JUL 2ND, 2014

I recently wrote a blog post on the ShinobiControls blog about using ReactiveCocoa with a ShinobiChart. It’s great – you should go and read it. I was also invited to give a talk at #bristech around the same time, and thought that this blog post would make a really interesting topic. The audience at #bristech is not an iOS audience. Not even mobile-focused. It’s very much a mixed discipline event, with a heavy focus on javascript (lowest common denominator etc.). Therefore I decided a general talk on functional reactive programming, with ReactiveCocoa examples would be a great place to go.

One of the things non-Cocoa developers complain about is the somewhat alien appearance of Objective-C. Now, I don’t really think this is a valid complaint, but in the interests of making my talk more accessible, I decided that if the examples I gave were in Swift then fewer people would be frightened off.

And so begins the great-swiftening. I took the original project which accompanied the previous blog post, and swiftified it. There were a few things I thought might be useful to share. This post is the combination of those thoughts.

Bridging Headers

Bridging headers are part of the machinery which enables interaction between swift and objective-C. They’re well-documented as part of Apple’s interoperability guide. Essentially, there is a special header inside your project (specified with a build setting) into which the objective-C headers for the classes you wish to use with Swift should be collected.

The ReactiveWikiMonitor project uses 3 Objective-C libraries:

  • ShinobiCharts
  • SocketRocket
  • ReactiveCocoa

Therefore, the bridging header looks like this:

#import <ShinobiCharts/ShinobiChart.h>
#import <SocketRocket/SRWebSocket.h>
#import <ReactiveCocoa/ReactiveCocoa.h>

It’s actually that easy! I love how simple interoperability is at this level. However, if you try and compile this (with your Podfile created correctly and pods installed) then you’ll run into some problems within the ReactiveCocoa source.

Compiling ReactiveCocoa in a Swift Project

If you try to build the project now, the compiler will first attempt to compile your pods – including ReactiveCocoa. Do it. You’ll see that it doesn’t work – you get a compiler error around the and, or and not methods on RACSignal+Operations. This is because of a compiler bug, which will hopefully be fixed in a future release, but until then we can work around it by renaming those methods in the ReactiveCocoa source.

Find the RACSignal+Operations.h file in the CocoaPods project, and rename the aforementioned methods to rac_and, rac_or & rac_not. You’ll have to repeat this in the related implementation (.m) file as well. You can then find all the places that use these methods by attempting a build (there are only about three call sites in the RAC source), and fix each one by updating it to the new name. Note that it might also be possible to do this using Xcode’s refactor tools, but I’ve not had much success with them in the past.

Now your project will build, yay!

Using generics to improve syntax

One of the things I like about objective-C is the implicit casting available in the arguments to blocks. By this I mean the following is the signature for a map function in RAC (defined on RACStream):

- (instancetype)map:(id (^)(id value))block;

Which means that when creating a map stage in your pipeline, it would look like this:

map:^id(id value) {
     return value[@"content"];
 }]

The block returns an id, and takes an id for the value parameter. This is so that in objective-C you can build a functional pipeline which can process any datatype (since generics don’t exist there). However, the syntax allows you to specify (and therefore implicitly cast) these parameters, by defining your block like this:

map:^NSString*(NSDictionary *value) {
     return value[@"content"];
 }]

Although not strictly necessary (since the compiler will allow you to call any method on an id), this gives you additional type checking at compile time (and as you write the code).

And now we move our attention to the world of Swift. The Swift equivalent to id is AnyObject, so the map function now looks like this:

.map({ (value: AnyObject!) -> AnyObject in
  return value["content"]
})

If you attempt to build this code then (as of beta 2) the compiler will crash. You might think that the following would fix it:

.map({ (value: NSDictionary!) -> NSString in
  return value["content"]
})

However, Swift’s type system doesn’t like this (with a somewhat cryptic and misplaced error message). Therefore you need to explicitly cast:

.map({ (value: AnyObject!) -> AnyObject in
  if let dict = value as? NSDictionary {
    return dict["content"]
  }
  return ""
})

You have to do this every time you want to call a map function, which in my opinion is a little bit clumsy.
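
And it’s not just map – the same boilerplate appears at every stage. As a sketch, a filter written directly against the raw API (using the same WikiMonitor dictionaries, with their "type" field, that appear later in this post) would look like this:

.filter({ (value: AnyObject!) -> Bool in
  // The same manual downcast, repeated once again.
  if let dict = value as? NSDictionary {
    return (dict["type"] as NSString).isEqualToString("unspecified")
  }
  return false
})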

Which brings us to Swift’s generic system, and type inference.

A generic version of map

The syntax I’d like to use is:

.mapAs({ (dict: NSDictionary) -> NSString in
  return dict["content"] as NSString
})

So how do we go about building this mapAs() extension method? Well, extending a class in Swift is easy:

extension RACStream {
  func myNewMethod() {
      println("My new method")
  }
}

We’re going to create a generic mapAs() method, which includes the explicit downcasting and the call to the underlying map() method:

func mapAs<T,U: AnyObject>(block: (T) -> U) -> Self {
  return map({(value: AnyObject!) in
    if let casted = value as? T {
      return block(casted)
    }
    return nil
  })
}

This specifies that the mapAs method has two generic parameters – the input and the output – and that the output is required to be of type AnyObject. The closure we pass to the mapAs() method takes the first generic type and returns the second.

All the mapAs() method does is call the underlying map() method, but performs the downcasting as appropriate.

We can write a similar method for filter:

func filterAs<T>(block: (T) -> Bool) -> Self {
  return filter({(value: AnyObject!) in
    if let casted = value as? T {
      return block(casted)
    }
    return false
  })
}

This can obviously be extended to all the methods on RACStream, RACSignal, etc.
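
As one concrete example, here’s a minimal sketch of the subscribeNextAs() method used in the pipeline below. This one belongs in an extension on RACSignal (rather than RACStream), since that’s where subscribeNext: lives; the exact signature is my assumption, following the same downcasting pattern as mapAs() and filterAs():

func subscribeNextAs<T>(block: (T) -> ()) -> RACDisposable {
  // Wrap RACSignal's subscribeNext:, downcasting from AnyObject
  // before handing the value to the strongly-typed block. Values
  // that fail the cast are silently ignored.
  return subscribeNext({(value: AnyObject!) in
    if let casted = value as? T {
      block(casted)
    }
  })
}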

I find that using these generic methods (combined with Swift’s type inference), leads to a much more expressive pipeline:

wsConnector.messages
  .filterAs({ (dict: NSDictionary) in
      return (dict["type"] as NSString).isEqualToString("unspecified")
    })
  .mapAs({ (dict: NSDictionary) -> NSString in
    return dict["content"] as NSString
    })
  .deliverOn(RACScheduler.mainThreadScheduler())
  .subscribeNextAs({(value: NSString) in
    self.tickerLabel.text = value
    })

Conclusion

This is very much an interim piece of work. We can expect RAC3 to be swift-focused, and so these techniques won’t be required then. However, they don’t just apply to RAC: using generics to simplify block arguments is especially helpful when interfacing with any objective-C API that uses id as a type.
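
As a final illustration of that last point (my own sketch, not part of the original project), the same trick can wrap any id-based Cocoa callback, centralising the downcast in one place:

// Hypothetical helper: runs a strongly-typed closure against an
// id/AnyObject value, exactly as mapAs() and filterAs() do above.
func withCastedValue<T>(value: AnyObject!, block: (T) -> ()) {
  if let casted = value as? T {
    block(casted)
  }
}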

As ever, the code for this is available on the ‘swiftify’ branch of the ReactiveShinobi project on my github. If you don’t fancy having to fiddle with the ReactiveCocoa source once you’ve pulled it down, there’s also a swiftify_with_pods branch, which includes the source code changes.

sam

Jul 2nd, 2014 ios, swift

