# RPC Server (Dispatcher)

## Version 0.4.2

* [Simple setup](#simple)
* [High Performance Grid setup](#grid)
* [Security considerations](#security)
* [Help output](#help)

### <a id="simple" href="#simple">Simple setup</a>

Starting a Dispatcher can be as simple as running: `arachni_rpcd`

This will bind to `localhost:7331` by default.
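
If you need it to listen somewhere else, the `--address` and `--port` options
documented in the help output below control the bind settings; for example
(the address and port here are illustrative):

    arachni_rpcd --address=192.168.0.1 --port=7331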

### <a id="grid" href="#grid">High Performance Grid setup</a>

To connect Dispatchers into a Grid you will need to:
* specify an IP address or hostname on which each Dispatcher will be accessible
  by the rest of the Grid nodes (i.e. the other Dispatchers)
* specify a neighbouring Dispatcher when starting each new one
* use different Pipe IDs -- these identify independent bandwidth lines to the
  target, so that the workload can be split in a way that aggregates the
  collective bandwidth

After that, the Dispatchers will build their network themselves.

Here's how it's done...

**Firing up the first one:**

    arachni_rpcd --pipe-id="Pipe 1" --nickname="My Dispatcher" --address=192.168.0.1

**Adding more to make a Grid:**

    arachni_rpcd --pipe-id="Pipe 2" --nickname="My second Dispatcher" --address=192.168.0.2 --neighbour=192.168.0.1:7331

**Lather, rinse, repeat:**

    arachni_rpcd --pipe-id="Pipe 3" --nickname="My third Dispatcher" --address=192.168.0.3 --neighbour=192.168.0.2:7331

    arachni_rpcd --pipe-id="Pipe 4" --nickname="My fourth Dispatcher" --address=192.168.0.4 --neighbour=192.168.0.3:7331

This setup assumes that each Dispatcher sits on a machine with an independent
bandwidth line (to the target website, at least).

If, out of curiosity, you want to start a few Dispatchers on localhost, you
will need to specify the ports:

    arachni_rpcd --pipe-id="Pipe 1" --nickname="My Dispatcher"

    arachni_rpcd --pipe-id="Pipe 2" --nickname="My second Dispatcher" --port=1111 --neighbour=localhost:7331

    arachni_rpcd --pipe-id="Pipe 3" --nickname="My third Dispatcher" --port=2222 --neighbour=localhost:1111

etc.
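
When several Dispatchers share one host, you may also want to give each a
distinct, non-overlapping `--port-range` so that the RPC instances they spawn
don't compete for the same ports -- a sketch, with illustrative ranges:

    arachni_rpcd --pipe-id="Pipe 1" --nickname="My Dispatcher" --port-range=2000-2999

    arachni_rpcd --pipe-id="Pipe 2" --nickname="My second Dispatcher" --port=1111 --port-range=3000-3999 --neighbour=localhost:7331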

### <a id="security" href="#security">Security considerations</a>

**This is very important; read it thoroughly.**

By default, all connections are performed over SSL-encrypted sockets.
This takes care of encryption but not authN/authZ, which is a very important
issue when it comes to the Dispatcher.

The Dispatcher is a dispatch server, and as such, its focus is on maintaining a
pool of running servers, ready to be used at a moment's notice.
A major part of maintaining that pool is replenishing it once a dispatch call
has been performed -- i.e. when a server is popped from the pool, another one
must be pushed.

When you take into account that server shutdown is delegated to the client, the
security issue becomes crystal clear.
Clients can easily [fork-bomb](http://en.wikipedia.org/wiki/Fork_bomb) the
machine on which a Dispatcher is running.

This makes it crucial to provide Dispatcher access to trusted clients *only*.
Sufficient authN/authZ can be achieved by either:

* configuring the relevant SSL options for both the clients and dispatch
  servers (see the sketch after this list), or
* rolling your own network-security scheme via a VPN or something similar.
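
As one way to provision the SSL side, you could generate an unencrypted
private key and a certificate for the server and point the Dispatcher at them
via the documented SSL options; the `openssl` invocation and paths below are
illustrative, not part of Arachni itself:

    # Generate an unencrypted private key and a self-signed certificate.
    # (The help output warns against encrypted keys.)
    openssl req -x509 -newkey rsa:2048 -nodes -days 365 \
        -keyout server-key.pem -out server-cert.pem

    # Start the Dispatcher with the documented SSL options; presumably,
    # --ssl-ca supplies the CA certificate used to verify connecting peers.
    arachni_rpcd --address=192.168.0.1 --pipe-id="Pipe 1" \
        --ssl-pkey server-key.pem \
        --ssl-cert server-cert.pem \
        --ssl-ca   ca-cert.pem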

### <a id="help" href="#help">Help output</a>

The help output of the RPC server is fairly straightforward:

```
Arachni - Web Application Security Scanner Framework v1.0dev
   Author: Tasos "Zapotek" Laskos <tasos.laskos@gmail.com>

           (With the support of the community and the Arachni Team.)

   Website:       http://arachni-scanner.com
   Documentation: http://arachni-scanner.com/wiki


  Usage:  arachni_rpcd [options]

  Supported options:

    -h
    --help                      output this

    --address=<host>            specify address to bind to
                                    (Default: localhost)

    --port=<num>                specify port to listen to
                                    (Default: 7331)

    --port-range=<beginning>-<end>

                                specify port range for the RPC instances
                                    (Make sure to allow for a few hundred ports.)
                                    (Default: 1025-65535)

    --reroute-to-logfile        reroute all output to a logfile under 'logs/'

    --pool-size=<num>           how many server workers/processes should be
                                available at any given moment (Default: 5)

    --neighbour=<URL>           URL of a neighbouring Dispatcher (used to build a grid)

    --weight=<float>            weight of the Dispatcher

    --pipe-id=<string>          bandwidth pipe identification

    --nickname=<string>         nickname of the Dispatcher

    --debug


    SSL --------------------------
    (All SSL options will be honored by the dispatched RPC instances as well.)
    (Do *not* use encrypted keys!)

    --ssl-pkey <file>           location of the server SSL private key (.pem)
                                    (Used to verify the server to the clients.)

    --ssl-cert <file>           location of the server SSL certificate (.pem)
                                    (Used to verify the server to the clients.)

    --node-ssl-pkey <file>      location of the client SSL private key (.pem)
                                    (Used to verify this node to other servers.)

    --node-ssl-cert <file>      location of the client SSL certificate (.pem)
                                    (Used to verify this node to other servers.)

    --ssl-ca <file>             location of the CA certificate (.pem)
```

If you are interested in providing webappsec scanning services, you can write
your own client using the [[RPC API | RPC API]].