Solution: Include a new section on configuring both ZMQ and the host
OS tx/rx buffers to facilitate sending large messages at a high data
rate with the PGM protocol.
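A minimal sketch of the kind of tuning such a section could cover, assuming
libzmq 3.x or later (where these options take int values) and the classic
cppzmq binding; the endpoint, rate, and buffer sizes are illustrative, not
recommendations:

    // Hedged sketch: tuning 0MQ socket options for high-rate PGM traffic.
    #include <zmq.hpp>

    int main ()
    {
        zmq::context_t ctx (1);
        zmq::socket_t pub (ctx, ZMQ_PUB);

        int rate = 100000;         //  Max multicast data rate, kbit/s
        int recovery_ivl = 10000;  //  PGM recovery interval, ms
        int sndbuf = 1048576;      //  Kernel SO_SNDBUF, bytes

        pub.setsockopt (ZMQ_RATE, &rate, sizeof rate);
        pub.setsockopt (ZMQ_RECOVERY_IVL, &recovery_ivl, sizeof recovery_ivl);
        pub.setsockopt (ZMQ_SNDBUF, &sndbuf, sizeof sndbuf);

        pub.connect ("epgm://239.255.128.46:64646");
        return 0;
    }

On the OS side, ZMQ_SNDBUF/ZMQ_RCVBUF map to the kernel's SO_SNDBUF/SO_RCVBUF,
which Linux caps at net.core.wmem_max/net.core.rmem_max; raising those sysctls
(e.g. sysctl -w net.core.wmem_max=1048576) is the host half of the tuning.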
The VMCI transport allows fast communication between the host
and a virtual machine, between virtual machines on the same host,
and within a single virtual machine (like IPC).
It requires VMware to be installed on the host and VMware Tools
to be installed on each guest.
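A hedged sketch of the endpoint form used by the VMCI transport, based on
the vmci:// syntax documented for libzmq; the CID and port below are
illustrative (CID 2 conventionally addresses the host), and both sides are
shown in one program only to illustrate the two endpoint forms:

    // Hedged sketch: REQ/REP over the vmci:// transport.
    #include <zmq.hpp>

    int main ()
    {
        zmq::context_t ctx (1);

        //  Server side (e.g. on the host): bind to any CID on port 5555.
        zmq::socket_t rep (ctx, ZMQ_REP);
        rep.bind ("vmci://*:5555");

        //  Client side (e.g. in a guest): connect to CID 2, port 5555.
        zmq::socket_t req (ctx, ZMQ_REQ);
        req.connect ("vmci://2:5555");
        return 0;
    }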
The list of named authors in the man pages omits many
contributors and doesn't reflect the real process. I've taken out all named
authors and referred to the contribution policy instead. Hopefully this will
encourage contributions to the man pages.
The PGM transport supports omitting the network interface from the
endpoint, in which case the default interface is selected:
announce.connect("epgm://eth0;239.255.128.46:64646"); // Use eth0
announce.connect("epgm://239.255.128.46:64646"); // Use the default
Also, mention C++ in the list of additional community bindings of 0MQ in
zmq.txt.
Multicast loopback is not real multicast, but rather a kernel-space
simulation of it. Moreover, it tends to be rather unreliable and lossy.
Removing the option will force users to use transports better
suited to the job, such as inproc or ipc (sketched below).
Signed-off-by: Martin Sustrik <sustrik@250bpm.com>
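To illustrate the replacement suggested above, a minimal sketch of
intra-process PUB/SUB over inproc, assuming the classic cppzmq binding; the
endpoint name is illustrative. Note that with inproc the bind must happen
before any connect, and both sockets must share one context:

    // Hedged sketch: replacing in-host multicast loopback with inproc.
    #include <zmq.hpp>
    #include <cstring>

    int main ()
    {
        zmq::context_t ctx (1);

        zmq::socket_t pub (ctx, ZMQ_PUB);
        pub.bind ("inproc://events");       //  Bind before any connect

        zmq::socket_t sub (ctx, ZMQ_SUB);
        sub.setsockopt (ZMQ_SUBSCRIBE, "", 0);
        sub.connect ("inproc://events");

        zmq::message_t msg (5);
        memcpy (msg.data (), "hello", 5);
        pub.send (msg);

        zmq::message_t incoming;
        sub.recv (&incoming);
        return 0;
    }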
- Clarify ZMQ_LINGER, zmq_close (), zmq_term () relationship (sketched
  after this list)
- New socket options
- Clarify thread safety of sockets and migration between threads
- Other minor and spelling fixes
Signed-off-by: Martin Lucina <mato@kotelna.sk>
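A hedged sketch of the ZMQ_LINGER / zmq_close () / zmq_term () relationship
clarified above, against the 0MQ 2.x C API; the endpoint and linger value
are illustrative:

    // ZMQ_LINGER bounds how long zmq_term () blocks flushing messages of
    // already-closed sockets: 0 discards pending messages at once, while
    // -1 (the default) may block zmq_term () forever if a peer is away.
    #include <zmq.h>
    #include <string.h>

    int main ()
    {
        void *ctx = zmq_init (1);
        void *s = zmq_socket (ctx, ZMQ_PUSH);
        zmq_connect (s, "tcp://127.0.0.1:5555");  //  Peer may be absent

        int linger = 0;                           //  Milliseconds
        zmq_setsockopt (s, ZMQ_LINGER, &linger, sizeof linger);

        zmq_msg_t msg;
        zmq_msg_init_size (&msg, 5);
        memcpy (zmq_msg_data (&msg), "hello", 5);
        zmq_send (s, &msg, 0);                    //  Queued, not yet sent

        //  zmq_close () returns immediately; the queued message is handed
        //  to the context, and zmq_term () waits at most `linger` ms.
        zmq_close (s);
        zmq_term (ctx);
        return 0;
    }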
- fixed unwrapped text in new man pages
- fixed over-long lines in older pages, where possible
- removed reference to old standalone devices from index page
- added reference to new zmq_device[3] documentation to index page
- some minor spelling corrections
* Added documentation for zmq_deviced, which we're developing
* Created consistent page footer in documentation template
* Page footer notes doc authors and copyright statement