pulse2percept.topography.neuropythy

Classes

NeuropythyMap(subject[, cache_dir])
class pulse2percept.topography.neuropythy.NeuropythyMap(subject, cache_dir=None, **params)[source]
build(**build_params)[source]

Build the model

Every model must have a `build` method, which is meant to perform all expensive one-time calculations. You must call build before calling predict_percept.

Important

Don’t override this method if you are building your own model. Customize _build instead.

Parameters: build_params (additional parameters to set) – You can overwrite parameters that are listed in get_default_params. Trying to add new class attributes outside of get_default_params will cause a FreezeError. Example: model.build(param1=val)
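
Example

A minimal sketch of the typical workflow, assuming the neuropythy package is installed and can fetch the ‘fsaverage’ subject (the subject name is an assumption here; the first call may download data into cache_dir):

>>> from pulse2percept.topography.neuropythy import NeuropythyMap
>>> nmap = NeuropythyMap(subject='fsaverage', cache_dir=None)  # 'fsaverage' is an assumed subject name
>>> nmap.build()  # one-time setup (expensive computations), per the build docs above
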
cortex_to_dva(xc, yc, zc)[source]

Gives the visual field position(s) of the cortex point(s) (xc, yc, zc).

Parameters: xc, yc, zc – The x, y, and z-coordinate(s) of the cortex point(s) to look up (in mm).
Returns: x, y (array_like) – The x and y-coordinate(s) of the visual field point(s) (in dva).
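
Example

An illustrative call on the map built above; the coordinates are arbitrary mm values, so real outputs depend on the loaded subject, and points far from the cortical mesh may come back as NaN (see v1_to_dva below):

>>> x_dva, y_dva = nmap.cortex_to_dva([-10.0, -12.5], [-95.0, -92.0], [-5.0, -2.0])
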
dva_to_cortex(x, y, region='v1', hemi=None, surface='midgray')[source]

Gives the cortex position(s) of the visual field point(s) (x, y).

Parameters:
  • x, y – The x and y-coordinate(s) of the visual field point(s) to look up (in dva).
  • region (str) – The visual field map to look up the point(s) in. Valid options are ‘v1’, ‘v2’, and ‘v3’. Default is ‘v1’.
  • hemi (str) – The hemisphere to look up the point(s) in. Valid options are ‘lh’ and ‘rh’.
  • surface (str) – The surface to look up the point(s) on. Default is ‘midgray’. Other common options include ‘pial’ and ‘white’.
Returns: cortex_pts (array_like) – Cortical addresses of the visual field points (a cortical address provides the face containing a point and the barycentric coordinates of the point within that face).

Adapted from code courtesy of Noah Benson.
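
Example

An illustrative lookup of cortical addresses in v2 on the white surface (the input coordinates are arbitrary; the format of the returned addresses follows the description above):

>>> addr = nmap.dva_to_cortex([1.5, -3.0], [0.5, 2.0], region='v2', surface='white')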

dva_to_v1(x, y, surface='midgray')[source]

Gives the 3D cortex position(s) of the visual field point(s) (x, y) in v1.

Parameters:
  • x, y – The x and y-coordinate(s) of the visual field point(s) to look up (in dva).
  • surface (str) – The surface to look up the point(s) on. Default is ‘midgray’. Other common options include ‘pial’ and ‘white’.
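
Example

A sketch of a lookup on the pial surface (the input values in dva are arbitrary; the outputs are 3D cortical coordinates in mm for the loaded subject):

>>> v1_pts = nmap.dva_to_v1([2.0, -1.0], [3.0, 0.0], surface='pial')
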
dva_to_v2(x, y, surface='midgray')[source]

Gives the 3D cortex position(s) of the visual field point(s) (x, y) in v2.

Parameters:
  • x, y – The x and y-coordinate(s) of the visual field point(s) to look up (in dva).
  • surface (str) – The surface to look up the point(s) on. Default is ‘midgray’. Other common options include ‘pial’ and ‘white’.
dva_to_v3(x, y, surface='midgray')[source]

Gives the 3D cortex position(s) of the visual field point(s) (x, y) in v3.

Parameters:
  • x, y – The x and y-coordinate(s) of the visual field point(s) to look up (in dva).
  • surface (str) – The surface to look up the point(s) on. Default is ‘midgray’. Other common options include ‘pial’ and ‘white’.
from_dva()[source]

Returns a dict containing the region(s) that this visuotopic map maps to and the corresponding mapping function(s).
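
Example

A sketch of how the returned dict might be used. The ‘v1’ key and the call signature are assumptions based on the dva_to_v1 method above:

>>> mapping = nmap.from_dva()
>>> v1_pts = mapping['v1']([2.0], [3.0])  # presumably equivalent to nmap.dva_to_v1([2.0], [3.0])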

get_default_params()[source]

Returns a dict of default parameter values. Required to inherit from BaseModel.
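
Example

A quick way to see which attributes can be overridden in build; the exact set of parameters depends on the class and version (cort_nn_thresh, mentioned under v1_to_dva below, is one likely entry):

>>> defaults = nmap.get_default_params()
>>> sorted(defaults)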

is_built

A flag indicating whether the model has been built

load_meshes(subject)[source]

Predicts retinotopy and loads submeshes for the given subject. Adapted from code courtesy of Noah Benson

set_params(**params)[source]

Set the parameters of this model
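
Example

A sketch of updating an existing attribute in one call; cort_nn_thresh is documented under v1_to_dva below, the value here is arbitrary, and whether it is exposed this way is an assumption (setting a name that is not an existing attribute would cause a FreezeError, per the build docs above):

>>> nmap.set_params(cort_nn_thresh=1000)  # arbitrary threshold in µm; assumed to be a settable parameter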

to_dva()[source]

Returns a dict containing the region(s) that this visuotopic map maps from, along with the corresponding inverse mapping function(s). This transform is optional for most models.
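
Example

A sketch mirroring from_dva for the inverse direction. The ‘v1’ key and the three-argument (mm) call signature are assumptions based on v1_to_dva below:

>>> inverse = nmap.to_dva()
>>> x_dva, y_dva = inverse['v1']([-10.0], [-95.0], [-5.0])  # presumably equivalent to nmap.v1_to_dva(...)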

v1_to_dva(xv1, yv1, zv1)[source]

Convert points in v1 to dva. Uses the mean of the 5 nearest neighbors in the cortical mesh, weighted by 1/distance, to interpolate dva. Any points that are more than self.cort_nn_thresh µm from the nearest neighbor will be set to NaN.

Parameters: xv1, yv1, zv1 – The x, y, and z-coordinate(s) of the v1 point(s) to look up (in mm).
Returns: x, y (array_like) – The x and y-coordinate(s) of the visual field point(s) (in dva).
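
Example

A sketch of the interpolation and the NaN fallback. All coordinates are arbitrary; only points near the actual v1 mesh of the loaded subject yield finite dva, and the far-away last point is expected to come back as NaN:

>>> import numpy as np
>>> x_dva, y_dva = nmap.v1_to_dva([-10.0, 500.0], [-95.0, 500.0], [-5.0, 500.0])
>>> np.isnan(np.asarray(x_dva)[-1])  # True if the last point is farther than cort_nn_thresh from the mesh
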
v2_to_dva(xv2, yv2, zv2)[source]

Convert points in v2 to dva. Uses the mean of the 5 nearest neighbors in the cortical mesh, weighted by 1/distance, to interpolate dva. Any points that are more than self.cort_nn_thresh µm from the nearest neighbor will be set to NaN.

Parameters: xv2, yv2, zv2 – The x, y, and z-coordinate(s) of the v2 point(s) to look up (in mm).
Returns: x, y (array_like) – The x and y-coordinate(s) of the visual field point(s) (in dva).
v3_to_dva(xv3, yv3, zv3)[source]

Convert points in v3 to dva. Uses the mean of the 5 nearest neighbors in the cortical mesh, weighted by 1/distance, to interpolate dva. Any points that are more than self.cort_nn_thresh µm from the nearest neighbor will be set to NaN.

Parameters: xv3, yv3, zv3 – The x, y, and z-coordinate(s) of the v3 point(s) to look up (in mm).
Returns: x, y (array_like) – The x and y-coordinate(s) of the visual field point(s) (in dva).