Magic words:
psql -U postgres
Most \d
commands support an additional __schema__.__name__ parameter
and accept wildcards like *.*
\q
: Quit/Exit
\c __database__
: Connect to a database
\d __table__
: Show table definition, including triggers
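For example (the table name public.users below is just a hypothetical illustration of the pattern):

```
\dt public.*
: List all tables in the public schema (wildcard pattern)
\d public.users
: Show the definition of a hypothetical table public.users
```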
# at the R REPL prompt
devtools::install_github('IRkernel/repr')
devtools::install_github('IRkernel/IRdisplay')
devtools::install_github('IRkernel/IRkernel')
require(IRdisplay)
/**
 * Solves the n-Queens puzzle in O(n!)
 * Let p(r) be the column of the queen on the rth row (there must be exactly 1 queen per row)
 * There must also be exactly 1 queen per column, hence p must be a permutation of (0 until n)
 * All n (col + row) values must be distinct and all n (col - row) values must be distinct
 * (otherwise two queens share a diagonal); offsetting one set by n keeps the two sets disjoint,
 * so a combined distinct size of 2n means both conditions hold
 * @return all solutions: each permutation p is such that p(i) is the column of the queen on the ith row
 */
def nQueens(n: Int) = (0 until n).permutations filter { p =>
  p.zipWithIndex.flatMap { case (c, r) => Seq(n + c + r, c - r) }.distinct.size == 2 * n
}
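The same permutation-filter idea can be sketched in Python for comparison (the function name is mine, not part of the original snippet):

```python
from itertools import permutations

def n_queens(n):
    # keep permutations where all n (col + row) values are distinct and all n
    # (col - row) values are distinct; offsetting one set by n keeps the two
    # sets disjoint, so a combined size of 2n means no two queens share a diagonal
    return [p for p in permutations(range(n))
            if len({n + c + r for r, c in enumerate(p)} |
                   {c - r for r, c in enumerate(p)}) == 2 * n]
```

For n = 4 this yields the two classic solutions, (1, 3, 0, 2) and (2, 0, 3, 1).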
import numpy as np

def xtab(*cols, apply_wt=False):
    '''
    returns:
    (i) xt, NumPy array storing the xtab results; its number of dimensions
        equals the number of data arrays passed in
    (ii) unique_vals_all_cols, a tuple with one 1D NumPy array per dimension
        of xt (for a 2D xtab, the tuple comprises the row and column headers)
    pass in:
    (i) one or more 1D NumPy arrays of integers
    (ii) if apply_wt is True, the last array in cols is an array of weights
    '''
    if apply_wt:
        cols, wt = cols[:-1], cols[-1]
    else:
        wt = np.ones(cols[0].size)
    unique_vals_all_cols, idx = zip(*(np.unique(c, return_inverse=True) for c in cols))
    xt = np.zeros([u.size for u in unique_vals_all_cols])
    np.add.at(xt, idx, wt)    # accumulate (weighted) counts per cell
    return xt, unique_vals_all_cols
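A quick usage sketch for xtab (self-contained, with a compact restatement of the function; the example data is made up):

```python
import numpy as np

def xtab(*cols, apply_wt=False):
    # compact restatement of the xtab above
    if apply_wt:
        cols, wt = cols[:-1], cols[-1]
    else:
        wt = np.ones(cols[0].size)
    uniq, idx = zip(*(np.unique(c, return_inverse=True) for c in cols))
    xt = np.zeros([u.size for u in uniq])
    np.add.at(xt, idx, wt)
    return xt, uniq

rows = np.array([0, 0, 1, 1, 1])
cols_ = np.array([0, 1, 0, 0, 1])
xt, (rvals, cvals) = xtab(rows, cols_)
# xt is the 2x2 contingency table [[1, 1], [2, 1]]

wts = np.array([2., 1., 1., 1., 3.])
xtw, _ = xtab(rows, cols_, wts, apply_wt=True)
# xtw is the weighted table [[2, 1], [2, 3]]
```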
require(zoo)
require(xts)
#---------------------- mock an irregular (non-uniform interval) time series -----------------------#
start <- Sys.time() - (20 * 60 * 60)  # 20 hours ago
end <- Sys.time()
# create the time series index
idx_ts <- seq.POSIXt(start, end, by = 'hours')
/**
 * Returns a transformed list (of strings) of the
 * 1-to-100 integer list passed in, such that:
 * integers evenly divisible by 3 (but not 5) are replaced by "fizz"
 * integers evenly divisible by 5 (but not 3) are replaced by "buzz"
 * integers evenly divisible by both 3 & 5 are replaced by "fizzbuzz"
 * all other integers are replaced by their string representation
 */
def fizzBuzz(q: List[Int] = (1 to 100).toList): List[String] =
  q.foldLeft(List[String]()) { (u, v) =>
    (if (v % 15 == 0) "fizzbuzz"
     else if (v % 3 == 0) "fizz"
     else if (v % 5 == 0) "buzz"
     else v.toString) :: u
  }.reverse   // foldLeft prepends, so reverse to restore the original order
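For comparison, the same transformation sketched in Python (the function name is my own):

```python
def fizz_buzz(nums=range(1, 101)):
    # check divisibility by 15 first, so "fizzbuzz" wins over "fizz"/"buzz"
    return ["fizzbuzz" if n % 15 == 0 else
            "fizz" if n % 3 == 0 else
            "buzz" if n % 5 == 0 else
            str(n) for n in nums]
```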
#!/bin/bash
#------------------------------------------------------------------------------ | |
# Name: sbtmkdirs | |
# Purpose: Create an SBT project directory structure with a few simple options. | |
# Author: Alvin Alexander, http://alvinalexander.com | |
# Info: http://alvinalexander.com/sbtmkdirs | |
# License: Creative Commons Attribution-ShareAlike 2.5 Generic | |
# http://creativecommons.org/licenses/by-sa/2.5/ | |
#------------------------------------------------------------------------------ |
# Bulk convert shapefiles to GeoJSON using ogr2ogr
# For more information, see http://ben.balter.com/2013/06/26/how-to-convert-shapefiles-to-geojson-for-use-on-github/
# Note: assumes you're in a folder with one or more zip files containing shapefiles,
# and outputs GeoJSON with the crs:84 SRS (for use on GitHub or elsewhere)
# geojson conversion; usage: shp2geojson filename-without-extension
function shp2geojson() {
  ogr2ogr -f GeoJSON -t_srs crs:84 "$1.geojson" "$1.shp"
}
#---------------- custom install Apache Spark -------------------#
# instructions to build Apache Spark from source w/ current Scala, using sbt (vs. maven)
# download the Apache Spark source from the appropriate mirror, then
# untar:
tar zxf spark-1.3.1.tgz
package operator

import scala.language.implicitConversions

object FunctionalPipeline {
  // the constructor is private to this object: instances are created
  // only via the implicit conversion below
  class PipedObject[T] private[FunctionalPipeline] (value: T) {
    def |>[R](f: T => R): R = f(this.value)
  }
  implicit def toPiped[T](value: T): PipedObject[T] = new PipedObject[T](value)
}
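The |> pipe above can be sketched in Python as well (an illustrative translation, not part of the original snippet), overloading | as the pipe operator:

```python
# A Python sketch of the Scala |> pipe; the class name `Piped` is my own.
class Piped:
    def __init__(self, value):
        self.value = value

    def __or__(self, f):
        # pipe: feed the wrapped value into f, re-wrap the result
        return Piped(f(self.value))

# chain transformations left to right, then unwrap
result = (Piped(3) | (lambda x: x * 2) | (lambda x: x + 1)).value
# result == 7
```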