@catree
catree / make_vtk_camera.cpp
Created March 15, 2021 13:36 — forked from decrispell/make_vtk_camera.cpp
Convert standard camera intrinsic (focal length, principal point) and extrinsic parameters (rotation and translation) into a vtkCamera for rendering. Assume square pixels and 0 skew for now.
/**
* Convert standard camera intrinsic and extrinsic parameters to a vtkCamera instance for rendering
* Assume square pixels and 0 skew (for now).
*
* focal_len : camera focal length (units pixels)
* nx,ny : image dimensions in pixels
* principal_pt: camera principal point,
* i.e. the intersection of the principal ray with the image plane (units pixels)
* camera_rot, camera_trans : rotation, translation matrix mapping world points to camera coordinates
* depth_min, depth_max : needed to set the clipping range
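The preview ends before the construction itself. As a rough sketch of how those parameters typically map onto vtkCamera calls (assumptions here: x_cam = R * x_world + t, the camera looks down +z with image y pointing down, and square pixels as stated), something along these lines works:

#include <cmath>
#include <vtkCamera.h>
#include <vtkSmartPointer.h>

vtkSmartPointer<vtkCamera> MakeVtkCamera(double focal_len, int nx, int ny,
                                         double cx, double cy,  // principal point (pixels)
                                         const double R[3][3], const double t[3],
                                         double depth_min, double depth_max)
{
  auto cam = vtkSmartPointer<vtkCamera>::New();
  const double pi = std::acos(-1.0);

  // Camera centre in world coordinates: C = -R^T * t.
  double C[3], forward[3], up[3];
  for (int i = 0; i < 3; ++i) {
    C[i]       = -(R[0][i] * t[0] + R[1][i] * t[1] + R[2][i] * t[2]);
    forward[i] =   R[2][i];   // camera +z axis expressed in world coordinates
    up[i]      =  -R[1][i];   // image y points down, so the view-up is -y_cam
  }

  cam->SetPosition(C[0], C[1], C[2]);
  cam->SetFocalPoint(C[0] + forward[0], C[1] + forward[1], C[2] + forward[2]);
  cam->SetViewUp(up[0], up[1], up[2]);

  // Vertical field of view from the focal length (square pixels assumed).
  cam->SetViewAngle(2.0 * std::atan2(0.5 * ny, focal_len) * 180.0 / pi);

  // Off-centre principal point -> normalised window-centre offset.
  cam->SetWindowCenter(-2.0 * (cx - 0.5 * nx) / nx, 2.0 * (cy - 0.5 * ny) / ny);

  cam->SetClippingRange(depth_min, depth_max);
  return cam;
}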
@catree
catree / bench.py
Created November 18, 2019 01:47 — forked from Erotemic/bench.py
benchmark code
import ubelt as ub
import numpy as np
from PIL import Image
import six
import cv2
from clab.augment import augment_common
from clab.util import imutil
from clab import util
try:
    import skimage
except ImportError:
    skimage = None  # scikit-image is optional here

Real depth in OpenGL / GLSL

http://olivers.posterous.com/linear-depth-in-glsl-for-real

So, many places will give you clues about how to get linear depth from the OpenGL depth buffer, visualise it, or do other things with it. This, however, is what I believe to be the definitive answer:

This link http://www.songho.ca/opengl/gl_projectionmatrix.html gives a good run-down of the projection matrix, and the link between eye-space Z (z_e below) and normalised device coordinates (NDC) Z (z_n below). From there, we have

A = -(zFar + zNear) / (zFar - zNear);

B = -2 * zFar * zNear / (zFar - zNear);
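With those two terms, recovering eye-space depth from a depth-buffer sample reduces to a couple of lines. A minimal sketch (plain C/C++ here rather than GLSL; d is the stored window-space depth in [0, 1]):

// Recover positive eye-space depth from a depth-buffer sample d in [0, 1].
float linearizeDepth(float d, float zNear, float zFar)
{
    float z_n = 2.0f * d - 1.0f;                      // window depth -> NDC depth
    float A = -(zFar + zNear) / (zFar - zNear);
    float B = -2.0f * zFar * zNear / (zFar - zNear);
    // The perspective divide gives z_n = -A - B / z_e, so z_e = -B / (z_n + A).
    float z_e = -B / (z_n + A);                       // negative: the camera looks down -z
    return -z_e;                                      // distance in front of the camera, in [zNear, zFar]
}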

@catree
catree / show_opencv_image_in_opengl.cpp
Created September 2, 2019 15:54 — forked from insaneyilin/show_opencv_image_in_opengl.cpp
Show an OpenCV cv::Mat image in an OpenGL window (using GLFW)
#include <stdio.h>
#include <stdlib.h>
#include <iostream>
#include <GL/glew.h>
#include <GLFW/glfw3.h>
#include <opencv2/opencv.hpp>
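The preview stops at the includes; the core step the title describes is turning the cv::Mat into a texture. A minimal sketch of that step (my own illustration given the headers above, not necessarily the gist's exact code):

// Upload an 8-bit BGR cv::Mat as an OpenGL texture; returns the texture id.
GLuint matToTexture(const cv::Mat& img)
{
    GLuint tex = 0;
    glGenTextures(1, &tex);
    glBindTexture(GL_TEXTURE_2D, tex);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    glPixelStorei(GL_UNPACK_ALIGNMENT, 1);       // cv::Mat rows are tightly packed, not 4-byte aligned
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB,       // internal format
                 img.cols, img.rows, 0,
                 GL_BGR, GL_UNSIGNED_BYTE,       // OpenCV stores 8-bit colour images as BGR
                 img.data);
    return tex;                                  // draw it on a textured quad inside the GLFW loop
}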
@catree
catree / points_on_sphere
Created August 27, 2019 13:50 — forked from dinob0t/points_on_sphere
Generates evenly distributed points on the surface of a sphere
"""
To generate 'num' points on a sphere of radius 'r' centred on the origin
- Random placement involves randomly chosen values of 'z' and 'phi'
- Regular placement involves choosing points such that there is one point per area element 'd_area'
References:
Deserno, 2004, How to generate equidistributed points on the surface of a sphere
http://www.cmu.edu/biolphys/deserno/pdf/sphere_equi.pdf
"""
@catree
catree / align_scan.py
Created May 9, 2019 08:24 — forked from smeschke/align_scan.py
Aligns a scanned document
import cv2, numpy as np, random, math
# Find contour edges
# Find the edge that is torn
# use the hough line transform
# create a mask image where the lines are white on a black background
# check if the point is in a white or black region
# Rotate the torn edges
# Measure how much they overlap
# The rotation with the maximum overlap will be how they should align
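As an illustration of the "Hough lines on a black mask" step in that outline (sketched here in C++/OpenCV rather than the gist's Python, with guessed thresholds):

#include <opencv2/opencv.hpp>

// Build a mask where the detected lines are white on a black background.
cv::Mat line_mask(const cv::Mat& gray)
{
    cv::Mat edges, mask = cv::Mat::zeros(gray.size(), CV_8UC1);
    cv::Canny(gray, edges, 50, 150);
    std::vector<cv::Vec4i> lines;
    cv::HoughLinesP(edges, lines, 1, CV_PI / 180, 80, 100, 10);  // threshold, minLineLength, maxLineGap
    for (const cv::Vec4i& l : lines)
        cv::line(mask, cv::Point(l[0], l[1]), cv::Point(l[2], l[3]), cv::Scalar(255), 3);
    return mask;  // later steps can test whether a rotated edge point lands on white or black
}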
@catree
catree / test_blas.c
Created February 12, 2019 15:33 — forked from TNick/test_blas.c
Some routines to test BLAS and some code around it
/*
* Author: Nicu Tofan
* License: BSD
*
* See below for getRealTime() license.
*/
#include <cblas.h>
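The preview cuts off after the header. A small self-contained example of the kind of call such a test exercises (my own sketch, not one of the gist's routines) is a 2x2 cblas_dgemm:

#include <cblas.h>
#include <stdio.h>

int main(void)
{
    /* C = alpha * A * B + beta * C, row-major 2x2 matrices */
    double A[2 * 2] = {1, 2, 3, 4};
    double B[2 * 2] = {5, 6, 7, 8};
    double C[2 * 2] = {0, 0, 0, 0};
    cblas_dgemm(CblasRowMajor, CblasNoTrans, CblasNoTrans,
                2, 2, 2,          /* M, N, K */
                1.0, A, 2,        /* alpha, A, lda */
                B, 2,             /* B, ldb */
                0.0, C, 2);       /* beta, C, ldc */
    printf("%g %g\n%g %g\n", C[0], C[1], C[2], C[3]);   /* expect 19 22 / 43 50 */
    return 0;
}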
@catree
catree / configure_cuda_p70.md
Created September 21, 2018 16:45 — forked from alexlee-gk/configure_cuda_p70.md
Use integrated graphics for display and NVIDIA GPU for CUDA on Ubuntu 14.04

This was tested on a ThinkPad P70 laptop with Intel integrated graphics and an NVIDIA GPU:

lspci | egrep 'VGA|3D'
00:02.0 VGA compatible controller: Intel Corporation Device 191b (rev 06)
01:00.0 VGA compatible controller: NVIDIA Corporation GM204GLM [Quadro M3000M] (rev a1)

A reason to use the integrated graphics for display is if installing the NVIDIA drivers causes the display to stop working properly. In my case, Ubuntu would get stuck in a login loop after installing the NVIDIA drivers. This happened regardless of whether I installed the drivers from the "Additional Drivers" tab in "System Settings" or from ppa:graphics-drivers/ppa on the command line.
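One common way to pin the display to the integrated GPU (an illustration of the general approach; the gist's exact steps continue beyond this preview) is to point X at the Intel device explicitly in /etc/X11/xorg.conf, using the BusID from the lspci output above, leaving the NVIDIA card free for CUDA:

Section "Device"
    Identifier  "intel"
    Driver      "intel"
    BusID       "PCI:0:2:0"    # 00:02.0, the Intel controller listed by lspci
EndSection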

# Use local scikit-image
import sys
sys.path.insert(0, "/home/dan/University/projects/gsoc_face_detection/scikit-image/")
from skimage.feature import multiblock_local_binary_pattern
from skimage.transform import integral_image
import numpy as np
import skimage.io as io
import xml.etree.ElementTree as ET
@catree
catree / avx_dispatch_example.c
Created November 14, 2017 18:41 — forked from zchothia/avx_dispatch_example.c
AVX CPU dispatching - based on Agner Fog's C++ vector class library [http://www.agner.org/optimize/vectorclass.zip]
// AVX CPU dispatching - based on Agner Fog's C++ vector class library:
// http://www.agner.org/optimize/vectorclass.zip
#include <stdio.h>
#include <stdbool.h>
//------------------------------------------------------------------------------
//>> BEGIN <instrset.h>
// Detect 64 bit mode
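//------------------------------------------------------------------------------
// Not this gist's code: the same dispatch idea can be illustrated more compactly
// with the GCC/Clang __builtin_cpu_supports() builtin instead of hand-rolled
// CPUID parsing (placeholder kernels stand in for real AVX/scalar implementations).
#include <stdio.h>

static void kernel_avx(void)    { puts("AVX path"); }
static void kernel_scalar(void) { puts("scalar path"); }

int main(void)
{
    if (__builtin_cpu_supports("avx"))
        kernel_avx();        /* CPU advertises AVX: take the vectorised path */
    else
        kernel_scalar();     /* older CPU: fall back to the generic path */
    return 0;
}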