NYLXS Mailing Lists and Archives
NYLXS members have a lot to say and share, and we don't keep many secrets. Join the Hangout mailing list and say your piece.


MESSAGE
DATE 2016-11-01
FROM Ruben Safir
SUBJECT Re: [Learn] how is it indexing in cuda
From learn-bounces-at-nylxs.com Tue Nov 1 23:08:37 2016
X-Original-To: archive-at-mrbrklyn.com
Delivered-To: archive-at-mrbrklyn.com
Received: from www.mrbrklyn.com (www.mrbrklyn.com [96.57.23.82])
by mrbrklyn.com (Postfix) with ESMTP id DFD08161312;
Tue, 1 Nov 2016 23:08:36 -0400 (EDT)
X-Original-To: learn-at-nylxs.com
Delivered-To: learn-at-nylxs.com
Received: from [10.0.0.62] (flatbush.mrbrklyn.com [10.0.0.62])
by mrbrklyn.com (Postfix) with ESMTP id 84848160E77;
Tue, 1 Nov 2016 23:08:33 -0400 (EDT)
To: Samir Iabbassen, learn-at-nylxs.com
References: <69bd63a0-9c7f-dbd9-9846-fe71ada4dde6-at-mrbrklyn.com>
<1478051308634.56730-at-liu.edu>
From: Ruben Safir
Message-ID: <7bc48c3a-841d-36b0-c243-9fc6866b867e-at-mrbrklyn.com>
Date: Tue, 1 Nov 2016 23:08:33 -0400
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:45.0) Gecko/20100101
Thunderbird/45.4.0
MIME-Version: 1.0
In-Reply-To: <1478051308634.56730-at-liu.edu>
Subject: Re: [Learn] how is it indexing in cuda
X-BeenThere: learn-at-nylxs.com
X-Mailman-Version: 2.1.17
Precedence: list
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Errors-To: learn-bounces-at-nylxs.com
Sender: "Learn"

On 11/01/2016 09:48 PM, Samir Iabbassen wrote:
> Take a look at this document if you have time. I can explain why and how we use indexing in CUDA tomorrow.
> http://users.wfu.edu/choss/CUDA/docs/Lecture%205.pdf
>
> You may take a look at other related lectures for CUDA programming on http://users.wfu.edu/choss/CUDA/docs/
>
>
>

I've been looking at the CUDA docs, but they added to the confusion, not relieved it.
The first two arguments in a kernel instantiation

<<< B1, T1 >>>

are dim3's (x, y, and z), but the explanation of how to access them is not
comprehensible, at least to me.
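
For anyone stuck on the same thing: as far as I can tell, the two dim3's are
never passed to the kernel as parameters; you read them back inside the kernel
through built-in variables. A minimal sketch of my own (not from the docs):

#include <cstdio>

__global__ void whereAmI()
{
    // gridDim  == B1 (blocks in the grid), blockDim == T1 (threads per block)
    // blockIdx == this thread's block, threadIdx == this thread within it
    int x = blockIdx.x * blockDim.x + threadIdx.x;   // global x coordinate
    int y = blockIdx.y * blockDim.y + threadIdx.y;   // global y coordinate
    printf("block (%d,%d) thread (%d,%d) -> global (%d,%d)\n",
           blockIdx.x, blockIdx.y, threadIdx.x, threadIdx.y, x, y);
}

int main()
{
    dim3 B1(2, 2);   // 2 x 2 grid of blocks
    dim3 T1(4, 4);   // 4 x 4 threads per block
    whereAmI<<<B1, T1>>>();
    cudaDeviceSynchronize();   // wait so the device printf output appears
    return 0;
}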


http://docs.nvidia.com/cuda/cuda-c-programming-guide/index.html#execution-configuration

B.19. Execution Configuration

Any call to a __global__ function must specify the execution configuration for that call. The execution configuration defines the dimension of the grid and blocks that will be used to execute the function on the device, as well as the associated stream (see CUDA C Runtime for a description of streams).

The execution configuration is specified by inserting an expression of the form <<< Dg, Db, Ns, S >>> between the function name and the parenthesized argument list, where:

Dg is of type dim3 (see dim3) and specifies the dimension and size of the grid, such that Dg.x * Dg.y * Dg.z equals the number of blocks being launched;
Db is of type dim3 (see dim3) and specifies the dimension and size of each block, such that Db.x * Db.y * Db.z equals the number of threads per block;
Ns is of type size_t and specifies the number of bytes in shared memory that is dynamically allocated per block for this call in addition to the statically allocated memory; this dynamically allocated memory is used by any of the variables declared as an external array as mentioned in __shared__; Ns is an optional argument which defaults to 0;
S is of type cudaStream_t and specifies the associated stream; S is an optional argument which defaults to 0.

Yikes!
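
Deciphered as best I can, a launch that fills in all four slots would look
something like this (my own sketch; the sizes are placeholders):

__global__ void kern()
{
    extern __shared__ float buf[];                   // backed by Ns bytes below
    int t = threadIdx.y * blockDim.x + threadIdx.x;  // 0..255 within the block
    buf[t] = (float)t;                               // touch the shared memory
}

int main()
{
    dim3 Dg(4, 2);                   // grid:  4 * 2 * 1 = 8 blocks
    dim3 Db(16, 16);                 // block: 16 * 16 * 1 = 256 threads each
    size_t Ns = 256 * sizeof(float); // dynamic shared memory bytes per block
    cudaStream_t S;
    cudaStreamCreate(&S);            // the stream the launch is queued on

    kern<<<Dg, Db, Ns, S>>>();       // the full <<< Dg, Db, Ns, S >>> form
    cudaStreamSynchronize(S);
    cudaStreamDestroy(S);
    return 0;
}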


http://docs.nvidia.com/cuda/cuda-c-programming-guide/index.html#kernels

2.1. Kernels

CUDA C extends C by allowing the programmer to define C functions, called kernels, that, when called, are executed N times in parallel by N different CUDA threads, as opposed to only once like regular C functions.

A kernel is defined using the __global__ declaration specifier and the number of CUDA threads that execute that kernel for a given kernel call is specified using a new <<<...>>> execution configuration syntax (see C Language Extensions). Each thread that executes the kernel is given a unique thread ID that is accessible within the kernel through the built-in threadIdx variable.

As an illustration, the following sample code adds two vectors A and B of size N and stores the result into vector C:

// Kernel definition
__global__ void VecAdd(float* A, float* B, float* C)
{
    int i = threadIdx.x;
    C[i] = A[i] + B[i];
}

int main()
{
    ...
    // Kernel invocation with N threads
    VecAdd<<<1, N>>>(A, B, C);
    ...
}

Here, each of the N threads that execute VecAdd() performs one pair-wise addition.
2.2. Thread Hierarchy

For convenience, threadIdx is a 3-component vector, so that threads can be identified using a one-dimensional, two-dimensional, or three-dimensional thread index, forming a one-dimensional, two-dimensional, or three-dimensional block of threads, called a thread block. This provides a natural way to invoke computation across the elements in a domain such as a vector, matrix, or volume.

The index of a thread and its thread ID relate to each other in a straightforward way: For a one-dimensional block, they are the same; for a two-dimensional block of size (Dx, Dy), the thread ID of a thread of index (x, y) is (x + y * Dx); for a three-dimensional block of size (Dx, Dy, Dz), the thread ID of a thread of index (x, y, z) is (x + y * Dx + z * Dx * Dy).
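
In code, that formula is just this (my own restatement of the docs'
arithmetic, not their example):

__global__ void flatThreadId()
{
    // block of size (Dx, Dy, Dz) == (blockDim.x, blockDim.y, blockDim.z)
    int Dx = blockDim.x, Dy = blockDim.y;
    int threadId = threadIdx.x                 // x
                 + threadIdx.y * Dx            // + y * Dx
                 + threadIdx.z * Dx * Dy;      // + z * Dx * Dy
    // 1D block: reduces to threadIdx.x
    // 2D block: reduces to threadIdx.x + threadIdx.y * Dx
    (void)threadId;  // silence the unused-variable warning in this sketch
}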

As an example, the following code adds two matrices A and B of size NxN and stores the result into matrix C:

// Kernel definition
__global__ void MatAdd(float A[N][N], float B[N][N],
                       float C[N][N])
{
    int i = threadIdx.x;
    int j = threadIdx.y;
    C[i][j] = A[i][j] + B[i][j];
}

int main()
{
    ...
    // Kernel invocation with one block of N * N * 1 threads
    int numBlocks = 1;
    dim3 threadsPerBlock(N, N);
    MatAdd<<<numBlocks, threadsPerBlock>>>(A, B, C);
    ...
}

There is a limit to the number of threads per block, since all threads of a block are expected to reside on the same processor core and must share the limited memory resources of that core. On current GPUs, a thread block may contain up to 1024 threads.

However, a kernel can be executed by multiple equally-shaped thread blocks, so that the total number of threads is equal to the number of threads per block times the number of blocks.

Blocks are organized into a one-dimensional, two-dimensional, or three-dimensional grid of thread blocks as illustrated by Figure 6. The number of thread blocks in a grid is usually dictated by the size of the data being processed or the number of processors in the system, which it can greatly exceed.
[Figure 6: Grid of Thread Blocks]


The number of threads per block and the number of blocks per grid specified in the <<<...>>> syntax can be of type int or dim3. Two-dimensional blocks or grids can be specified as in the example above.

Each block within the grid can be identified by a one-dimensional, two-dimensional, or three-dimensional index accessible within the kernel through the built-in blockIdx variable. The dimension of the thread block is accessible within the kernel through the built-in blockDim variable.

Extending the previous MatAdd() example to handle multiple blocks, the code becomes as follows.

// Kernel definition
__global__ void MatAdd(float A[N][N], float B[N][N],
                       float C[N][N])
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    int j = blockIdx.y * blockDim.y + threadIdx.y;
    if (i < N && j < N)
        C[i][j] = A[i][j] + B[i][j];
}

int main()
{
    ...
    // Kernel invocation
    dim3 threadsPerBlock(16, 16);
    dim3 numBlocks(N / threadsPerBlock.x, N / threadsPerBlock.y);
    MatAdd<<<numBlocks, threadsPerBlock>>>(A, B, C);
    ...
}

A thread block size of 16x16 (256 threads), although arbitrary in this case, is a common choice. The grid is created with enough blocks to have one thread per matrix element as before. For simplicity, this example assumes that the number of threads per grid in each dimension is evenly divisible by the number of threads per block in that dimension, although that need not be the case.
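
When N is not evenly divisible, the usual fix (not shown in the excerpt
above) is to round the block count up and let the kernel's bounds check throw
away the overhang; dropping this into the main() above:

    // ceil(N / 16) blocks in each dimension instead of N / 16
    dim3 threadsPerBlock(16, 16);
    dim3 numBlocks((N + threadsPerBlock.x - 1) / threadsPerBlock.x,
                   (N + threadsPerBlock.y - 1) / threadsPerBlock.y);
    // the if (i < N && j < N) guard in MatAdd() masks off the extra threads
    MatAdd<<<numBlocks, threadsPerBlock>>>(A, B, C);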

Thread blocks are required to execute independently: It must be possible to execute them in any order, in parallel or in series. This independence requirement allows thread blocks to be scheduled in any order across any number of cores as illustrated by Figure 5, enabling programmers to write code that scales with the number of cores.

Threads within a block can cooperate by sharing data through some shared memory and by synchronizing their execution to coordinate memory accesses. More precisely, one can specify synchronization points in the kernel by calling the __syncthreads() intrinsic function; __syncthreads() acts as a barrier at which all threads in the block must wait before any is allowed to proceed. Shared Memory gives an example of using shared memory.

For efficient cooperation, the shared memory is expected to be a low-latency memory near each processor core (much like an L1 cache) and __syncthreads() is expected to be lightweight.
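
Here is about the smallest use of both that I can come up with (my sketch,
not the guide's Shared Memory example): each thread stages one element into
the shared tile, the barrier guarantees the whole tile is loaded, and only
then does each thread read its neighbor's slot.

__global__ void shiftLeft(const float *in, float *out, int n)
{
    extern __shared__ float tile[];        // sized via the Ns launch argument
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        tile[threadIdx.x] = in[i];         // each thread loads one element
    __syncthreads();                       // barrier: tile is now fully loaded
    // read the right-hand neighbor's slot (block-edge elements skipped
    // to keep the sketch short)
    if (i < n - 1 && threadIdx.x < blockDim.x - 1)
        out[i] = tile[threadIdx.x + 1];
}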

~~~~~~~~~~~~~~~~~~~~~~
- Great, so I read ALL of this, being the studious programmer that I am, and the
video throws me completely for a loop. It says to map a 2D object:


//First create a mapping from the 2D block and grid locations
//to an absolute 2D location in the image, then use that to
//calculate a 1D offset

I have no background for this. Now the PowerPoint slides have given me a better idea of what they might mean,
because the example in the shared-memory discussion maps shared memory arrays to the original image (see slide 54).
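
Best I can reconstruct it, the mapping the comment is describing is something
like this (my guess at their greyscale exercise, since the video never spells
it out):

__global__ void toGrey(const uchar4 *rgba, unsigned char *grey,
                       int numRows, int numCols)
{
    // absolute 2D location in the image, from block and grid coordinates
    int col = blockIdx.x * blockDim.x + threadIdx.x;
    int row = blockIdx.y * blockDim.y + threadIdx.y;
    if (col >= numCols || row >= numRows)
        return;                            // outside the image
    int offset = row * numCols + col;      // the 1D offset, row-major
    uchar4 p = rgba[offset];
    grey[offset] = (unsigned char)(0.299f * p.x + 0.587f * p.y + 0.114f * p.z);
}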

But truly, that video, which seemed so important, which I had to get an account for, and which has quizzes (what I
thought you were basing the quiz I missed last week on), is total garbage. It does not, in and of itself, provide enough
information to solve the examples it presents without external resources, of which they seem to have none.

It is a TYPICAL UC production... especially from schools like UC Davis. It is educational porn. It chews up your time with
a lot of emotional drama, but it does NOTHING for your intellect. It is a huge time waster that sells you the message
that they promote (nice to see NVIDIA's CEO of research giving a useless spiel in the middle of the lecture), but
it is useless to teach anything from.

I can't stand all the smiling faces and likeable personas in that video. These are not educators. They do not know how
to educate.

Ruben




>
> ________________________________________
> From: Ruben Safir
> Sent: Tuesday, November 1, 2016 8:16 PM
> To: Samir Iabbassen; learn-at-nylxs.com
> Subject: how is it indexing in cuda
>
> This slide says we have blocks and threads. If we are indexing on both
> then it says
>
> - With M threads/block a unique index for each thread is given by:
> int index = threadIdx.x + blockIdx.x * M;
>
>
> I don't understand this at all.
>
> this is slide 39
>
>
> I have two coordinates. Why am I adding M?
>
> --
> So many immigrant groups have swept through our town
> that Brooklyn, like Atlantis, reaches mythological
> proportions in the mind of the world - RI Safir 1998
> http://www.mrbrklyn.com
>
> DRM is THEFT - We are the STAKEHOLDERS - RI Safir 2002
> http://www.nylxs.com - Leadership Development in Free Software
> http://www2.mrbrklyn.com/resources - Unpublished Archive
> http://www.coinhangout.com - coins!
> http://www.brooklyn-living.com
>
> Being so tracked is for FARM ANIMALS and extermination camps,
> but incompatible with living as a free human being. -RI Safir 2013
>
>
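
Coming back to the quoted question about M: the two coordinates
(blockIdx.x, threadIdx.x) get flattened into one global index the same way
(row, col) becomes row * width + col. Adding blockIdx.x * M skips past the M
threads of every block before this one (assuming the slide's M == blockDim.x):

// M threads per block, so block b owns global indices b*M .. (b+1)*M - 1
// block 0: index = threadIdx.x + 0*M  ->  0  .. M-1
// block 1: index = threadIdx.x + 1*M  ->  M  .. 2M-1
// block 2: index = threadIdx.x + 2*M  ->  2M .. 3M-1
int index = threadIdx.x + blockIdx.x * M;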


--
So many immigrant groups have swept through our town
that Brooklyn, like Atlantis, reaches mythological
proportions in the mind of the world - RI Safir 1998
http://www.mrbrklyn.com

DRM is THEFT - We are the STAKEHOLDERS - RI Safir 2002
http://www.nylxs.com - Leadership Development in Free Software
http://www2.mrbrklyn.com/resources - Unpublished Archive
http://www.coinhangout.com - coins!
http://www.brooklyn-living.com

Being so tracked is for FARM ANIMALS and extermination camps,
but incompatible with living as a free human being. -RI Safir 2013
_______________________________________________
Learn mailing list
Learn-at-nylxs.com
http://lists.mrbrklyn.com/mailman/listinfo/learn

  1. 2016-11-01 Ruben Safir <ruben-at-mrbrklyn.com> Re: [Learn] how is it indexing in cuda
  2. 2016-11-01 Ruben Safir <ruben.safir-at-my.liu.edu> Re: [Learn] not adequately speced of explained
  3. 2016-11-01 Ruben Safir <ruben-at-mrbrklyn.com> Subject: [Learn] how is it indexing in cuda
  4. 2016-11-01 Ruben Safir <ruben-at-mrbrklyn.com> Subject: [Learn] not adequately speced of explained
  5. 2016-11-02 Christopher League <league-at-contrapunctus.net> Re: [Learn] Fitch Algorithm - C++
  6. 2016-11-02 Ruben Safir <mrbrklyn-at-panix.com> Re: [Learn] Fitch Algorithm - C++
  7. 2016-11-02 Ruben Safir <ruben-at-mrbrklyn.com> Re: [Learn] how is it indexing in cuda
  8. 2016-11-02 Ruben Safir <ruben-at-mrbrklyn.com> Subject: [Learn] Fitch Algorithm - C++
  9. 2016-11-02 IEEE Computer Society <csconnection-at-computer.org> Subject: [Learn] Hear Google's John Martinis Take on Quantum Computing at
  10. 2016-11-02 Ruben Safir <ruben-at-mrbrklyn.com> Subject: [Learn] opencl
  11. 2016-11-02 Ruben Safir <ruben-at-mrbrklyn.com> Subject: [Learn] scheduled for tommorw
  12. 2016-11-02 Ruben Safir <ruben-at-mrbrklyn.com> Subject: [Learn] threads tutorial
  13. 2016-11-03 Ruben Safir <mrbrklyn-at-panix.com> Re: [Learn] Fitch Algorithm - C++
  14. 2016-11-03 Ruben Safir <mrbrklyn-at-panix.com> Re: [Learn] Fitch Algorithm - C++
  15. 2016-11-03 Ruben Safir <mrbrklyn-at-panix.com> Re: [Learn] Fitch Algorithm - C++
  16. 2016-11-03 Christopher League <league-at-contrapunctus.net> Re: [Learn] huffman code
  17. 2016-11-03 Ruben Safir <ruben-at-mrbrklyn.com> Re: [Learn] huffman code
  18. 2016-11-03 Ruben Safir <ruben.safir-at-my.liu.edu> Re: [Learn] huffman code
  19. 2016-11-03 Ruben Safir <ruben-at-mrbrklyn.com> Subject: [Learn] Fitch algorithm from the beginning
  20. 2016-11-03 From: <mrbrklyn-at-panix.com> Subject: [Learn] huffman code
  21. 2016-11-03 Ruben Safir <ruben-at-mrbrklyn.com> Subject: [Learn] Phenology meeting
  22. 2016-11-03 Ruben Safir <ruben-at-mrbrklyn.com> Subject: [Learn] relevant hackathon
  23. 2016-11-03 Ruben Safir <ruben-at-mrbrklyn.com> Subject: [Learn] relevant hackathon
  24. 2016-11-04 Ruben Safir <mrbrklyn-at-panix.com> Re: [Learn] huffman code
  25. 2016-11-04 Christopher League <league-at-contrapunctus.net> Subject: [Learn] Fitch/Sankoff
  26. 2016-11-05 Christopher League <league-at-contrapunctus.net> Re: [Learn] Fwd: templates within templates
  27. 2016-11-05 ruben safir <ruben-at-mrbrklyn.com> Subject: [Learn] Fwd: Re: const T vs T const
  28. 2016-11-05 ruben safir <ruben-at-mrbrklyn.com> Subject: [Learn] Fwd: Template Library files and Header linking troubles
  29. 2016-11-05 ruben safir <ruben-at-mrbrklyn.com> Subject: [Learn] Fwd: templates within templates
  30. 2016-11-06 Ruben Safir <mrbrklyn-at-panix.com> Re: [Learn] Fwd: templates within templates
  31. 2016-11-06 Ruben Safir <ruben-at-mrbrklyn.com> Re: [Learn] Fwd: templates within templates
  32. 2016-11-06 Ruben Safir <mrbrklyn-at-panix.com> Re: [Learn] Fwd: templates within templates
  33. 2016-11-06 Christopher League <league-at-contrapunctus.net> Re: [Learn] Fwd: templates within templates
  34. 2016-11-06 Ruben Safir <mrbrklyn-at-panix.com> Re: [Learn] Fwd: templates within templates
  35. 2016-11-06 Ruben Safir <ruben-at-mrbrklyn.com> Re: [Learn] Fwd: templates within templates
  36. 2016-11-06 Ruben Safir <ruben-at-mrbrklyn.com> Re: [Learn] Fwd: templates within templates
  37. 2016-11-06 Ruben Safir <ruben-at-mrbrklyn.com> Re: [Learn] GNU Parallel 20161022 ('Matthew') released [stable]
  38. 2016-11-07 Ruben Safir <ruben-at-mrbrklyn.com> Subject: [Learn] templates and ostream for future reference
  39. 2016-11-08 Christopher League <league-at-contrapunctus.net> Re: [Learn] C++ signature ambiguity
  40. 2016-11-08 Ruben Safir <ruben.safir-at-my.liu.edu> Re: [Learn] C++ signature ambiguity
  41. 2016-11-08 Ruben Safir <ruben-at-mrbrklyn.com> Subject: [Learn] C++ signature ambiguity
  42. 2016-11-08 Ruben Safir <ruben-at-mrbrklyn.com> Subject: [Learn] Fwd: Invitation: Phylogeny meeting -at- Weekly from 10:15 to
  43. 2016-11-08 Ruben Safir <mrbrklyn-at-panix.com> Subject: [Learn] Fwd: [nylug-talk] RSVP open: Wed Nov 16,
  44. 2016-11-09 Christopher League <league-at-contrapunctus.net> Re: [Learn] merge sort parallel hw
  45. 2016-11-09 Ruben Safir <ruben-at-mrbrklyn.com> Re: [Learn] merge sort parallel hw
  46. 2016-11-09 Ruben Safir <ruben-at-mrbrklyn.com> Re: [Learn] merge sort parallel hw
  47. 2016-11-09 Christopher League <league-at-contrapunctus.net> Re: [Learn] merge sort parallel hw
  48. 2016-11-09 Ruben Safir <ruben-at-mrbrklyn.com> Re: [Learn] mergesort tutorial
  49. 2016-11-09 Christopher League <league-at-contrapunctus.net> Re: [Learn] mergesort tutorial
  50. 2016-11-09 Christopher League <league-at-contrapunctus.net> Re: [Learn] namespace and external files confusion
  51. 2016-11-09 Ruben Safir <ruben-at-mrbrklyn.com> Re: [Learn] namespace and external files confusion
  52. 2016-11-09 From: "Carlos R. Mafra" <crmafra-at-gmail.com> Re: [Learn] Question about a small change
  53. 2016-11-09 Ruben Safir <ruben-at-mrbrklyn.com> Subject: [Learn] C++ call of overloaded ‘track(int*&
  54. 2016-11-09 ruben safir <ruben-at-mrbrklyn.com> Subject: [Learn] Fwd: lost arguments
  55. 2016-11-09 Ruben Safir <ruben-at-mrbrklyn.com> Subject: [Learn] Fwd: [dinosaur] Dating origins of dinosaurs,
  56. 2016-11-09 Ruben Safir <ruben-at-mrbrklyn.com> Subject: [Learn] merge sort parallel hw
  57. 2016-11-09 Ruben Safir <ruben-at-mrbrklyn.com> Subject: [Learn] mergesort tutorial
  58. 2016-11-09 Ruben Safir <ruben-at-mrbrklyn.com> Subject: [Learn] namespace and external files confusion
  59. 2016-11-10 Christopher League <league-at-contrapunctus.net> Re: [Learn] merge sort parallel hw
  60. 2016-11-10 Ruben Safir <mrbrklyn-at-panix.com> Re: [Learn] merge sort parallel hw
  61. 2016-11-10 Ruben Safir <mrbrklyn-at-panix.com> Re: [Learn] merge sort parallel hw
  62. 2016-11-10 Ruben Safir <ruben.safir-at-my.liu.edu> Re: [Learn] merge sort parallel hw
  63. 2016-11-10 Ruben Safir <ruben-at-mrbrklyn.com> Re: [Learn] [Hangout-NYLXS] mergesort tutorial
  64. 2016-11-10 Ruben Safir <mrbrklyn-at-panix.com> Subject: [Learn] Fwd: [Hangout-NYLXS] ease your mind- everything in the
  65. 2016-11-10 Ruben Safir <ruben.safir-at-my.liu.edu> Subject: [Learn] Fwd: [Hangout-NYLXS] R Programming Workshop
  66. 2016-11-10 Ruben Safir <ruben-at-mrbrklyn.com> Subject: [Learn] Paleocast phenogenetic tree building
  67. 2016-11-11 Christopher League <league-at-contrapunctus.net> Re: [Learn] merge sort parallel hw
  68. 2016-11-12 Ruben Safir <ruben-at-mrbrklyn.com> Subject: [Learn] HW of mergesort in parallel
  69. 2016-11-13 Ruben Safir <ruben-at-mrbrklyn.com> Subject: [Learn] merge sort in parallel assignment
  70. 2016-11-14 Christopher League <league-at-contrapunctus.net> Re: [Learn] merge sort in parallel assignment
  71. 2016-11-14 Ruben Safir <ruben-at-mrbrklyn.com> Re: [Learn] merge sort in parallel assignment
  72. 2016-11-14 Ruben Safir <ruben-at-mrbrklyn.com> Re: [Learn] merge sort parallel hw
  73. 2016-11-14 Ruben Safir <ruben-at-mrbrklyn.com> Subject: [Learn] CUDA and video
  74. 2016-11-14 Ruben Safir <ruben-at-mrbrklyn.com> Subject: [Learn] PNG Graphic formats and CRCs
  75. 2016-11-15 ruben safir <ruben-at-mrbrklyn.com> Subject: [Learn] Fwd: PNG coding
  76. 2016-11-15 ruben safir <ruben.safir-at-my.liu.edu> Subject: [Learn] Fwd: PNG Coding
  77. 2016-11-16 Ruben Safir <ruben-at-mrbrklyn.com> Re: [Learn] Fwd: lost arguments
  78. 2016-11-16 Ruben Safir <ruben-at-mrbrklyn.com> Re: [Learn] relevant hackathon
  79. 2016-11-16 Ruben Safir <ruben-at-mrbrklyn.com> Subject: [Learn] C++ Workshop Announcement
  80. 2016-11-16 Ruben Safir <mrbrklyn-at-panix.com> Subject: [Learn] Fwd: Re: ref use
  81. 2016-11-16 Ruben Safir <ruben-at-mrbrklyn.com> Subject: [Learn] ref use
  82. 2016-11-16 Ruben Safir <ruben-at-mrbrklyn.com> Subject: [Learn] why use a reference wrapper int his example
  83. 2016-11-17 Ruben Safir <mrbrklyn-at-panix.com> Re: [Learn] [Hangout-NYLXS] at K&R now
  84. 2016-11-17 Ruben Safir <ruben-at-mrbrklyn.com> Subject: [Learn] Fwd: [Hangout-NYLXS] Fwd: PNG Coding
  85. 2016-11-18 ruben safir <ruben-at-mrbrklyn.com> Subject: [Learn] C++ workshop and usenet responses
  86. 2016-11-19 ruben safir <ruben-at-mrbrklyn.com> Subject: [Learn] Fwd: ref use
  87. 2016-11-20 Ruben Safir <ruben-at-mrbrklyn.com> Subject: [Learn] when is the constructor called for an object
  88. 2016-11-21 ruben safir <ruben-at-mrbrklyn.com> Subject: [Learn] Fwd: creating a binary tree
  89. 2016-11-21 ruben safir <ruben-at-mrbrklyn.com> Subject: [Learn] Fwd: hidden static
  90. 2016-11-21 Ruben Safir <ruben-at-mrbrklyn.com> Subject: [Learn] Fwd: ISBI 2017 Call for Abstracts and Non-Author
  91. 2016-11-21 ruben safir <ruben-at-mrbrklyn.com> Subject: [Learn] Fwd: PNG coding
  92. 2016-11-21 ruben safir <ruben-at-mrbrklyn.com> Subject: [Learn] Fwd: Re: the new {} syntax
  93. 2016-11-21 ruben safir <ruben-at-mrbrklyn.com> Subject: [Learn] Fwd: when is the constructor called for an object
  94. 2016-11-21 ruben safir <ruben-at-mrbrklyn.com> Subject: [Learn] Fwd: when is the constructor called for an object
  95. 2016-11-21 Ruben Safir <ruben-at-mrbrklyn.com> Subject: [Learn] Fwd: [dinosaur] Eoconfuciusornis feather keratin and
  96. 2016-11-21 Ruben Safir <ruben-at-mrbrklyn.com> Subject: [Learn] look what I found
  97. 2016-11-22 Ruben Safir <ruben-at-mrbrklyn.com> Subject: [Learn] Cuccuency book
  98. 2016-11-22 ruben safir <ruben.safir-at-my.liu.edu> Subject: [Learn] declare a func or call an object
  99. 2016-11-22 Ruben Safir <ruben.safir-at-my.liu.edu> Subject: [Learn] Fwd: Re: Using CLIPS as a library
  100. 2016-11-23 Ruben Safir <ruben.safir-at-my.liu.edu> Subject: [Learn] Fwd: Simple C++11 Wrapper for CLIPS 6.30
  101. 2016-11-23 Ruben Safir <ruben-at-mrbrklyn.com> Subject: [Learn] Parrelel Programming HW2 with maxpath
  102. 2016-11-24 Ruben Safir <ruben-at-mrbrklyn.com> Subject: [Learn] great research news for big data
  103. 2016-11-24 Ruben Safir <ruben-at-mrbrklyn.com> Subject: [Learn] mapping algorithms
  104. 2016-11-24 Ruben Safir <ruben-at-mrbrklyn.com> Subject: [Learn] Todays meeting
  105. 2016-11-25 Ruben Safir <ruben-at-mrbrklyn.com> Subject: [Learn] Fwd: [dinosaur] Flightless theropod phylogenetic variation
  106. 2016-11-26 Ruben Safir <ruben-at-mrbrklyn.com> Re: [Learn] Note to self for Thursday
  107. 2016-11-26 Ruben Safir <ruben-at-mrbrklyn.com> Subject: [Learn] Fitch etc
  108. 2016-11-26 Ruben Safir <ruben-at-mrbrklyn.com> Subject: [Learn] Note to self for Thursday
  109. 2016-11-26 Ruben Safir <ruben-at-mrbrklyn.com> Subject: [Learn] operator<<() overloading details and friend
  110. 2016-11-27 Ruben Safir <ruben-at-mrbrklyn.com> Subject: [Learn] 130 year old feathers analysis
  111. 2016-11-27 ruben safir <ruben-at-mrbrklyn.com> Subject: [Learn] Fwd: ACM/SPEC ICPE 2017 - Call for Tutorial Proposals
  112. 2016-11-27 ruben safir <ruben-at-mrbrklyn.com> Subject: [Learn] Fwd: ACM/SPEC ICPE 2017 - Call for Workshop Proposals
  113. 2016-11-27 ruben safir <ruben-at-mrbrklyn.com> Subject: [Learn] Fwd: CfP 22nd Conf. Reliable Software Technologies,
  114. 2016-11-27 ruben safir <ruben-at-mrbrklyn.com> Subject: [Learn] Fwd: Seeking contributors for psyche-c
  115. 2016-11-29 Christopher League <league-at-contrapunctus.net> Re: [Learn] Look at this exciting output by my test program
  116. 2016-11-29 Ruben Safir <ruben-at-mrbrklyn.com> Re: [Learn] Look at this exciting output by my test program
  117. 2016-11-29 Ruben Safir <ruben-at-mrbrklyn.com> Re: [Learn] Look at this exciting output by my test program
  118. 2016-11-29 Christopher League <league-at-contrapunctus.net> Re: [Learn] Quantum Entanglement
  119. 2016-11-29 Ruben Safir <mrbrklyn-at-panix.com> Re: [Learn] Quantum Entanglement
  120. 2016-11-29 Ruben Safir <ruben-at-mrbrklyn.com> Subject: [Learn] Here is the paper I was talking out
  121. 2016-11-29 Ruben Safir <ruben-at-mrbrklyn.com> Subject: [Learn] Look at this exciting output by my test program
  122. 2016-11-29 nylxs <mrbrklyn-at-optonline.net> Subject: [Learn] Look at this exciting output by my test program
  123. 2016-11-29 Ruben Safir <ruben-at-mrbrklyn.com> Subject: [Learn] Quantum Entanglement
  124. 2016-11-29 Ruben Safir <ruben-at-mrbrklyn.com> Subject: [Learn] The Death of PBS
  125. 2016-11-29 Ruben Safir <ruben-at-mrbrklyn.com> Subject: [Learn] witmer lab ohio and 3d imaging
  126. 2016-11-30 Ruben Safir <ruben-at-mrbrklyn.com> Subject: [Learn] phylogenetic crawler
