Commit 0865bd8e authored by Simon Brass's avatar Simon Brass

Add basic note documentation (pre-article).

parent f41618da
#+TITLE: Request-based one-sided server-client communication
#+AUTHOR: Simon Braß <>
* Introduction
- Introduce the necessity of a complex warm-up step for event generation with respect to VBS at pp colliders and the NLO requirements (increasing computational time)
- The solution is the application of adaptive methods to enhance the efficiency of event generation
The VEGAS-amplified (VAMP) Monte Carlo integration algorithm, a combination of the multi-channel ansatz and the VEGAS algorithm, has been successfully parallelized with the help of the Message Passing Interface (MPI).
However, although the foundation for overlapping communication and computation has been laid, it has not been realised, for several reasons.
(1) The algorithm dictates the setup of the integration grids, which requires serial time on all workers.
This part of the organizational time cannot overlap with the computation, as all the information from the setup is required during the computation.
(2) The channel results, e.g. the adaptation information, can, however, be communicated to the master during the computation.
(3) The refinement of the integration grids and channel weights could be parallelized, but is not, in order to preserve the numerical properties of the warm-up and event generation (refer to the appendix and explain the LOCAL variants of some MPI routines).
A general-purpose Monte Carlo generator has to cover all possible scenarios: the user does not know the intrinsics of the applied algorithms and may (in most cases will) choose a bad parameter set, which leads to a horrifically bad speedup.
We switch between VEGAS-only parallelization (parallelization over a single grid, whose strata are scattered among a group of workers) and multi-channel parallelization, where each channel is computed by a single worker.
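A minimal sketch of this mode switch (illustrative, not the VAMP code; `Mode`, `choose_mode` and the threshold are assumptions), assuming the decision is driven by the number of calls a channel needs versus the size of the worker group:

```python
from enum import Enum

class Mode(Enum):
    VEGAS_GRID = "vegas"       # scatter the strata of one grid over a worker group
    MULTI_CHANNEL = "channel"  # one worker computes the whole channel

def choose_mode(n_calls, n_workers, min_calls_per_worker=1000):
    """Illustrative heuristic: parallelize over a single grid only if
    every worker in the group still receives a useful share of calls."""
    if n_calls >= n_workers * min_calls_per_worker:
        return Mode.VEGAS_GRID
    return Mode.MULTI_CHANNEL
```

A bad user-chosen parameter set then degrades the balance of the assignment, but never starves a worker group entirely.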
- Status quo: preparation of the computation → serial time; computation → parallel; collection of the results + adaptation of the grids → serial
- Goal: split the computation and the collection of the results evenly
* Tools
- Server/client model; communication with the master only
- Local/global request register (mutual) → pre-knowledge about the assignment of the workload
- Fair-share mechanism hidden in MPI_WAITSOME
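The fair-share property of MPI_WAITSOME can be mimicked without MPI: the call returns *all* requests that have completed so far (at least one), so the server serves a whole batch before waiting again and no single client is preferred. A hedged Python simulation (`waitsome` and `is_ready` are illustrative names, not part of any MPI binding):

```python
def waitsome(pending, is_ready):
    """Split pending requests into (completed, still_pending);
    like MPI_WAITSOME, all currently completed requests are returned."""
    done = [r for r in pending if is_ready(r)]
    rest = [r for r in pending if not is_ready(r)]
    return done, rest

# The server loop handles the whole completed batch at once,
# which is the fair-share behaviour exploited here.
```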
* Implementation
- Computation and communication in parallel → non-blocking communication (immediate return to the calling function)
- Division of the computational workload by channels → estimation of the computation time by a load balancer
- Communication directly after the computation of a channel
- Push request: the client pushes a notification to the server, which collects a certain number of requests via a fair-share mechanism (MPI_WAITSOME) and then processes them
- Request handling: matching requests are registered in advance at the server and the clients (a client only its own → local, the server all → global)
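One way such a load balancer could distribute channels by estimated computation time is a greedy longest-processing-time assignment; this is a sketch under that assumption, not the actual implementation (`balance_channels` is an illustrative name):

```python
import heapq

def balance_channels(costs, n_workers):
    """Assign each channel (index into costs) to a worker:
    most expensive channel first, always to the least-loaded worker."""
    loads = [(0.0, w) for w in range(n_workers)]
    heapq.heapify(loads)
    assignment = {}
    for ch in sorted(range(len(costs)), key=lambda c: -costs[c]):
        load, w = heapq.heappop(loads)
        assignment[ch] = w
        heapq.heappush(loads, (load + costs[ch], w))
    return assignment
```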
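The local/global request register could look like the following sketch (class name and tag scheme are illustrative): a client registers only its own requests, while the server registers those of every client, so both sides agree on the workload assignment before any message arrives:

```python
class RequestRegister:
    """Maps (client rank, channel) to a message tag known in advance."""

    def __init__(self):
        self._entries = {}

    def register(self, rank, channel, tag):
        self._entries[(rank, channel)] = tag

    def tag(self, rank, channel):
        return self._entries[(rank, channel)]

def fill_registers(client_channels):
    """Server builds the global register; each client only its local one."""
    global_reg = RequestRegister()
    local_regs = {rank: RequestRegister() for rank in client_channels}
    for rank, channels in client_channels.items():
        for ch in channels:
            tag = 1000 * rank + ch  # illustrative tag scheme
            global_reg.register(rank, ch, tag)
            local_regs[rank].register(rank, ch, tag)
    return global_reg, local_regs
```

With both registers filled consistently, an incoming push notification can be matched to its handler without any extra handshake.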