@ebarsoum
Created October 26, 2019 00:24
GPU memory overhead for PyCUDA
import numpy as np
from pynvml.smi import nvidia_smi
import pycuda.gpuarray as ga
import pycuda.driver as cuda

nvsmi = nvidia_smi.getInstance()

def getGPUMemoryUsage(gpu_index=0):
    """Return the used framebuffer memory (in MB) of the given GPU, as reported by NVML."""
    return nvsmi.DeviceQuery("memory.used")["gpu"][gpu_index]['fb_memory_usage']['used']

gpu_index = 0
print("Before: used GPU Memory: {} MB".format(getGPUMemoryUsage(gpu_index)))

# Creating the CUDA context and uploading even a tiny tensor reserves a
# sizable chunk of device memory -- this is the per-context overhead.
cuda.init()
context = cuda.Device(gpu_index).make_context()
tensor_gpu = ga.to_gpu(np.array([[1, 2, 3], [4, 5, 6]]).astype(np.float32))

print("After: used GPU Memory: {} MB".format(getGPUMemoryUsage(gpu_index)))

context.pop()
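
For comparison, here is a minimal sketch of the same measurement done entirely through PyCUDA's driver API with cuda.mem_get_info(), avoiding the pynvml dependency. Note that mem_get_info() requires an active context, so it can only observe allocations made after make_context(); the variable names (free_before, free_after) are illustrative, not part of the original snippet.

import numpy as np
import pycuda.driver as cuda
import pycuda.gpuarray as ga

cuda.init()
context = cuda.Device(0).make_context()
try:
    # mem_get_info() returns (free, total) in bytes for the current context's device.
    free_before, total = cuda.mem_get_info()
    tensor_gpu = ga.to_gpu(np.array([[1, 2, 3], [4, 5, 6]]).astype(np.float32))
    free_after, _ = cuda.mem_get_info()
    # The delta reflects only the tensor allocation (rounded up by the allocator);
    # the context overhead itself is already included in total - free_before.
    print("Tensor allocation: ~{:.2f} MB; used after context: {:.2f} MB".format(
        (free_before - free_after) / (1024 ** 2),
        (total - free_after) / (1024 ** 2)))
finally:
    context.pop()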