
GPU Platform Acceleration Suite - AccelerEyes Jacket
 Latest Release Notes
 
Jacket v1.7 Now Available!
 
02/26/2011 - v1.7

AccelerEyes released version 1.7 of the Jacket GPU programming platform for MATLAB®. Version 1.7 delivers a new Sparse Linear Algebra library, a new Signal Processing Library, a big boost to convolution functions, and much more.

New features available with Jacket 1.7 include:

Convolutions enhanced: CONV, CONV2, CONVN

Sparse Linear Algebra:
SPARSE, TRANSPOSE, CTRANSPOSE, MTIMES,
BICGSTAB, BICGSTABL, BICG, LSQR,
CGS, PCG, GMRES, QMR,
TFQMR

Graphics Library Refresh:
SURF, PLOT, IMAGESC, SCATTER3,
GHOLD, GSUBPLOT, GCLF, GDRAWNOW,
GFIGURE, GCLOSE

Signal Processing Library:
DECONV, FREQS, FREQZ, DCT,
IDCT, HILBERT, XCORR, XCORR2,
UPSAMPLE, DOWNSAMPLE, WINDOW, HAMMING,
BLACKMAN, BLACKMANHARRIS, HANNING, HANN,
KAISER, KAISERORD, SQUARE

ONES, ZEROS, RAND, RANDN, INF, NAN, etc. now available via a new usage of CLASS

New functions GINF and GNAN added

TIMEIT provides robust timing estimates for both CPU and GPU code snippets

GCOMPILE features:
BREAK, RETURN, CONTINUE,
additional EPS syntax: eps('single') and eps('double')

Support for remote desktop
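The new libraries above follow Jacket's usual pattern of overloading MATLAB functions on GPU data. A minimal sketch (the gsingle cast and the CLASS-style constructor follow the conventions named in the list above; exact overloads may vary by Jacket version and license):

```matlab
% Create data directly on the GPU using the new CLASS-style constructors,
% then exercise the enhanced convolution and signal-processing routines.
x = rand(1, 4096, 'gsingle');   % GPU vector via the new CLASS usage
w = gsingle(hamming(64)');      % window from the new Signal Processing Library
y = conv(x, w);                 % enhanced CONV executes on the GPU
t = timeit(@() conv(x, w));     % TIMEIT estimates the GPU snippet's runtime
```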


 

07/12/2010 - v1.4 (build 6121)

* Requires CUDA 3.1 drivers

 - Windows: 257.21 or higher
 - Linux: 256.35 or higher
 - Mac (32-bit)

* Users are not required to install the CUDA toolkit. Jacket 1.4 was built with CUDA 3.1.
 On Linux and Mac, Jacket's CUDA libraries must be given precedence
 (via LD_LIBRARY_PATH) over any existing CUDA toolkit installation.
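On Linux, giving Jacket's bundled CUDA libraries precedence might look like the following (the install path is illustrative, not Jacket's actual default; substitute your own Jacket directory):

```shell
# Prepend Jacket's bundled CUDA libraries so the dynamic linker finds
# them before any system-wide CUDA toolkit (path below is hypothetical).
export LD_LIBRARY_PATH="/opt/jacket/engine/lib64:${LD_LIBRARY_PATH}"
echo "${LD_LIBRARY_PATH}"
```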

Additions:
+ Added support for the NVIDIA Fermi architecture (GTX 400 and Tesla C2000 series)
 - Jacket DLA support for Fermi
+ Dramatically improved the performance of Jacket's JIT (Just-In-Time) compilation technology
 - Operations involving random scalar constants do not incur a recompile
 - Removed dependencies on MINGW and NVCC
+ Logical indexing now supported for SUBSREF and SUBSASGN, e.g. B = A(A > x)
+ MTIMES supports mixed types, no longer uses CUBLAS, and outperforms CUBLAS
+ SUM, MIN, MAX, ANY, ALL now supported over any number of columns, rows, or dimensions
+ MIN, MAX indexed output now supported for complex single and complex double inputs
+ SUM, MIN, MAX over columns are greatly accelerated; vector reductions are accelerated too
+ FIND performance improvements
+ CONVN, BLKDIAG, DOT performance improvements
+ CUMSUM now also supported for matrices
+ SORT, CONVN now supported in double-precision
+ HESS(A) and [P,H] = HESS(A) now supported (see Jacket DLA)
+ LEGENDRE now supported
+ Expanded GFOR support for:
 - MLDIVIDE, INV, HESS, MTIMES
 - FFT, FFT2, FFTN and inverses IFFT, IFFT2, IFFTN
+ PCG now supported: a linear system solver that uses the Preconditioned Conjugate
 Gradient method for dense matrices
+ Image Processing Library now available. Direct access to the NVIDIA Performance
 Primitives (NPP) enables new image processing functionality such as ERODE and DILATE.
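Several of the additions above, such as logical indexing, flexible reductions, and expanded GFOR support, can be sketched as follows (the gsingle cast and the gfor/gend syntax follow Jacket's documented conventions; treat this as illustrative rather than exhaustive):

```matlab
A = gsingle(randn(512));   % push a matrix to the GPU
B = A(A > 0);              % logical indexing via SUBSREF: keep positive entries
s = sum(A, 2);             % SUM along rows, now supported over any dimension
[m, idx] = max(A);         % indexed MAX output, now also for complex inputs
gfor k = 1:4               % expanded GFOR support now covers FFT2
    F(:,:,k) = fft2(A);
gend
```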

Changes:
+ Memory subsystem is now more stable and incurs less fragmentation, resulting in fewer
 "Out of memory" errors
+ MLDIVIDE now behaves correctly for singular inputs
+ FFT no longer gives incorrect values or CUFFT errors for certain sizes
+ HIST now uses the same binning method as MATLAB and therefore produces similar results
+ SQRT on negative real numbers correctly outputs complex data
+ SORT on rows more consistent with MATLAB behavior
+ GRADIENT no longer produces erroneous results in some cases
+ INTERP1 no longer segfaults and is partially supported

Known issues:
+ FULL results in a GPU failure on certain systems
+ BSXFUN may perform slower in certain situations
+ FFT fails for sizes less than 32 elements on some single-precision cards

 
 

 

