sgemv
minifmm
Nicolás Wolovick 20180517

The 3 horsemen of the apocalypse and off-by-one errors.
From Using OpenMP, Chapter 5.

nowait
ordered
master vs. single
numactl
"One may be worried about the creation of new threads within the inner loop. Worry not: libgomp in GCC is smart enough to create the threads only once. Once the team has done its work, the threads are returned to a "dock", waiting for new work to do. In other words, the number of times the clone system call is executed is exactly equal to the maximum number of concurrent threads. The parallel directive is not the same as a combination of pthread_create and pthread_join. There will be lots of locking/unlocking due to the implied barriers, though. I don't know if that can be reasonably avoided or whether it even should be."
So the example from Using OpenMP..., Fig. 5.24, is not that serious.
Even so, it can improve thread locality and, with it, reuse of the local caches.
parallel-parallel.c

#include <stdio.h>
#include <omp.h>

int main(void)
{
    #pragma omp parallel
    {
        printf("1st parallel, tid %d\n", omp_get_thread_num());
    }

    printf("In the middle, tid %d\n", omp_get_thread_num());

    #pragma omp parallel
    {
        printf("2nd parallel, tid %d\n", omp_get_thread_num());
    }

    return 0;
}
We count how many times it calls clone:
$ gcc -fopenmp parallel-parallel.c && OMP_NUM_THREADS=4 strace ./a.out 2>&1 | grep clone
clone(child_stack=0x7f8af92f5f70, flags=CLONE_VM|CLONE_FS|CLONE_FILES|CLONE_SIGHAND|CLONE_THREAD|CLONE_SYSVSEM|CLONE_SETTLS|CLONE_PARENT_SETTID|CLONE_CHILD_CLEARTID, parent_tidptr=0x7f8af92f69d0, tls=0x7f8af92f6700, child_tidptr=0x7f8af92f69d0) = 30402
clone(child_stack=0x7f8af8af4f70, flags=CLONE_VM|CLONE_FS|CLONE_FILES|CLONE_SIGHAND|CLONE_THREAD|CLONE_SYSVSEM|CLONE_SETTLS|CLONE_PARENT_SETTID|CLONE_CHILD_CLEARTID, parent_tidptr=0x7f8af8af59d0, tls=0x7f8af8af5700, child_tidptr=0x7f8af8af59d0) = 30403
clone(child_stack=0x7f8af82f3f70, flags=CLONE_VM|CLONE_FS|CLONE_FILES|CLONE_SIGHAND|CLONE_THREAD|CLONE_SYSVSEM|CLONE_SETTLS|CLONE_PARENT_SETTID|CLONE_CHILD_CLEARTID, parent_tidptr=0x7f8af82f49d0, tls=0x7f8af82f4700, child_tidptr=0x7f8af82f49d0) = 30404
Indeed, GOMP_parallel_start() and GOMP_parallel_end() leave the threads "docked" for reuse.
So the cost lies in the barriers and/or in the migration of threads to other CPUs.
[Plots from: Reid, Bull, OpenMP Microbenchmarks Version 2.0, 2004. For a Sun Fire 15K (8 processors; array of 729 elements).]
Joseph Harkness, Extending the EPCC OpenMP Microbenchmarks for OpenMP 3.0, University of Edinburgh, 2010.
sgemv
(Using OpenMP, p.162)
Parallel sgemv:

#include <stdio.h>
#include <omp.h>

#ifndef N
#define N 1024
#endif

float a[N][N], b[N], c[N];

int main(void)
{
    unsigned int i = 0, j = 0;
    double start = 0.0;

    start = omp_get_wtime();
    #pragma omp parallel for default(none) shared(start,a,b,c) private(i,j)
    for (i=0; i<N; ++i)
        for (j=0; j<N; ++j)
            c[i] += a[i][j]*b[j];
    /* ~3 floats touched per inner iteration; prints effective GiB/s */
    printf("%f\n", ((long)N*N*3*sizeof(float))/((1<<30)*(omp_get_wtime()-start)));

    return 0;
}
gcc-8 -O3 -fopenmp $PROG.c -o $PROG -DN="$n"
OMP_NUM_THREADS=$t taskset 0x0000000F numactl --interleave=all ./$PROG
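An alternative to taskset for pinning, using the OpenMP 4.0 affinity environment variables (my suggestion, not part of the original script; `./$PROG` stands for the sgemv binary built above):

```shell
# Let the OpenMP runtime pin one thread per core instead of using taskset.
OMP_NUM_THREADS=$t OMP_PLACES=cores OMP_PROC_BIND=close ./$PROG
```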
[Performance and efficiency plots for each benchmark machine:]

- mini: 1 × Intel Core i7-950 @ 3.07 GHz (4 cores, 8 threads), 16 GB DDR3 1066 MHz.
- ganesh: 4 × AMD Opteron 8212 @ 2.0 GHz (2 cores each), 4 × 8 GB.
- zx81: 2 × Intel E5-2620 v3 (6 cores), 4 samples.
- nabucodonosor: 2 × Intel E5-2680 v2 (10 cores), 4 samples.

We observed several things:
Scaling on zx81 is far from perfect (htop).
taskset effectively pins threads to cores.
omp and omp-task-depend show almost no difference in walltime.

"This somewhat surprising result is of course specific to the algorithm, implementation, system, and software used and would not be possible if the code were not so amenable to compiler analysis. The lesson to be learned from this study is that, for important program regions, both experimentation and analysis are needed. We hope that the insights given here are sufficient for a programmer to get started on this process."
(Using OpenMP, p.190)
#pragma opm parallel for ("opm": the misspelled pragma is silently ignored)
private (forgotten private clauses)
{ ... } (forgotten braces)

#pragma omp parallel
#pragma omp atomic
sum += a[omp_num_threads()];
++a[omp_num_threads()];