|
|
|
|
|
|
|
|
There are a lot of options, customizations and tweaks you can use, but fear not and don't let yourself be overwhelmed.
|
|
|
|
h3. Command reference
|
|
## "Debug mode ==(--debug)==":#debug
|
|
|
## "Only positives ==(--only-positives)==":#only-positives
|
|
|
## "HTTP request limit ==(--http-req-limit)==":#http-req-limit
|
|
|
## "HTTP harvest last ==(--http-harvest-last)==":#http-harvest-last
|
|
|
## "Cookie jar ==(--cookie-jar)==":#cookie-jar
|
|
|
## "Cookie string ==(--cookie-string)==":#cookie-string
|
|
|
## "User agent ==(--user-agent)==":#user-agent
|
|
|
## "Custom header ==(--custom-header)==":#custom-header
|
|
|
##* "Example":#custom-header_example
|
|
|
## "Authorized by ==(--authed-by)==":#authed-by
|
|
|
##* "Example":#authed-by_example
|
|
|
## "Login check URL ==(--login-check-url)==":#login-check-url
|
|
|
## "Login check pattern ==(--login-check-pattern)==":#login-check-pattern
|
|
|
# "Profiles":#profiles
|
|
|
## "Save profile ==(--save-profile)==":#save-profile
|
|
|
##* "Example":#save-profile_example
|
|
|
##* "Example":#exclude_example
|
|
|
## "Include ==(--include/-i)==":#include
|
|
|
## "Redundant ==(--redundant)==":#redundant
|
|
|
## "Audo-redundant ==(--auto-redundant)==":#auto-redundant
|
|
|
## "Follow subdomains ==(-f/--follow-subdomains)==":#follow-subdomains
|
|
|
## "Obey robots.txt file ==(--obey-robots-txt)==":#obey-robots-txt
|
|
|
## "Depth limit ==(--depth)==":#depth
|
|
|
## "Audit cookies ==(--audit-cookies/-c)==":#audit-cookies
|
|
|
## "Exclude cookie ==(--exclude-cookie)==":#exclude-cookie
|
|
|
## "Audit headers ==(--audit-headers)==":#audit-headers
|
|
|
# "Coverage":#coverage
|
|
|
## "Audit cookies extensively ==(--audit-cookies-extensively)==":#audit-cookies-extensively
|
|
|
## "Fuzz methods ==(--fuzz-methods)==":#fuzz-methods
|
|
|
## "Exclude binaries ==(--exclude-binaries)==":#exclude-binaries
|
|
|
# "Modules":#modules
|
|
|
## "List modules ==(--lsmod)==":#lsmod
|
|
|
##* "Example":#lsmod_example
|
h3(#only-positives). "Only positives ==(--only-positives)==":#only-positives
|
|
*Default*: disabled
|
|
|
*Multiple invocations?*: no
|
|
|
|
|
|
This will suppress all messages except for positive matches -- vulnerabilities.
|
|
|
|
|
|
h3(#http-req-limit). "HTTP request limit ==(--http-req-limit)==":#http-req-limit
|
|
|
|
Limit how many concurrent HTTP requests are sent.
|
|
*Warning*: Given enough bandwidth and a high limit it could cause a DoS.
|
|
|
Be careful not to set this option too high; you don't want to kill your server.
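
For example, assuming a target that can handle the extra load, the limit could be raised like so (the URL and value here are purely illustrative):

<pre><code>arachni --http-req-limit=50 http://test.com/</code></pre>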
|
|
|
|
|
|
h3(#http-harvest-last). "HTTP harvest last ==(--http-harvest-last)==":#http-harvest-last
|
|
|
|
|
|
*Expects*: <n/a>
|
|
|
*Default*: disabled (responses will be harvested for each page)
|
|
|
*Multiple invocations?*: no
|
|
|
|
|
|
Tells Arachni to build up the HTTP request queue with audit queries during the crawl for the *whole* site.
|
|
|
Once the crawl finishes the HTTP requests will be sent and the responses will be harvested.
|
|
|
|
|
|
*Note*: If you are scanning a high-end server and you are using a powerful machine with enough bandwidth *and* you feel dangerous, you can use this flag with an increased _==--http-req-limit==_ to get maximum performance out of your scan.
|
|
|
*Warning*: When scanning large websites with hundreds of pages this could eat up all your memory pretty quickly or even cause Arachni to choke up and stop responding.
|
|
|
Experiment with these options to find what suits you best or -- to be on the safe side -- use the defaults.
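
For instance, a rough sketch of such an aggressive setup (the values are illustrative, not recommendations) might look like:

<pre><code>arachni --http-harvest-last --http-req-limit=100 http://test.com/</code></pre>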
|
|
|
|
|
|
h3(#cookie-jar). "Cookie jar ==(--cookie-jar)==":#cookie-jar
|
|
|
|
|
|
*Expects*: cookiejar file
|
You should also take a look at the _==--exclude-cookie==_ option discussed later.
|
|
|
|
|
*Note*: If you don't feel comfortable setting your own cookie-jar you can use the Proxy or AutoLogin plugin to login to the web application.
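
As a rough sketch (file name, credentials and URLs are illustrative), you could create a Netscape-format cookie jar with curl and then hand it to Arachni:

<pre><code># log in with curl and store the session cookies in a Netscape-format jar
curl -c cookies.txt -d 'username=user&password=secret' http://test.com/login.php

# reuse the jar for the scan
arachni --cookie-jar=cookies.txt http://test.com/</code></pre>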
|
|
|
|
|
|
h3(#cookie-string). "Cookie string ==(--cookie-string)==":#cookie-string
|
|
|
|
|
|
*Expects*: string
|
|
|
*Default*: disabled
|
|
|
*Multiple invocations?*: no
|
|
|
|
|
|
Cookies, as a string, to be sent to the web application.
|
|
|
|
|
|
h4(#cookie-string_example). "Example":#cookie-string_example
|
|
|
|
|
|
```--cookie-string='userid=19;sessionid=deadbeefbabe'```
|
|
|
|
|
|
h3(#user-agent). "User agent ==(--user-agent)==":#user-agent
|
|
|
|
|
|
*Expects*: string
|
h4(#authed-by_example). "Example":#authed-by_example
|
|
|
|
|
<pre><code> --authed-by='John Doe <jdoe@test.com>'</code></pre>
|
|
|
|
|
|
h3(#login-check-url). "Login check URL ==(--login-check-url)==":#login-check-url
|
|
|
|
|
|
*Expects*: string
|
|
|
*Default*: disabled
|
|
|
*Multiple invocations?*: no
|
|
|
*Requires*: "login-check-pattern":#login-check-pattern
|
|
|
|
|
|
The URL passed to this option will be used to verify that the scanner is still logged in to the web application.
|
|
|
|
|
|
If the HTTP response body of that URL matches the "login-check-pattern":#login-check-pattern, the scanner is considered to still be logged in.
|
|
|
|
|
|
h3(#login-check-pattern). "Login check pattern ==(--login-check-pattern)==":#login-check-pattern
|
|
|
|
|
|
*Expects*: string
|
|
|
*Default*: disabled
|
|
|
*Multiple invocations?*: no
|
|
|
*Requires*: "login-check-url":#login-check-url
|
|
|
|
|
|
A pattern used against the body of the "login-check-url":#login-check-url to verify that the scanner is still logged in to the web application.
|
|
|
|
|
|
A positive match should indicate that the scanner is logged in.
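
Putting the two options together, a session check could look something like this (the check URL and pattern are illustrative):

<pre><code>arachni --login-check-url=http://test.com/account --login-check-pattern='Logged in as' http://test.com/</code></pre>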
|
|
|
|
|
|
h2(#profiles). "Profiles":#profiles
|
|
|
|
|
|
h3(#save-profile). "Save profile ==(--save-profile)==":#save-profile
|
This will cause URLs that contain "calendar.php" to be crawled only 3 times.
|
|
|
|
|
This option is useful when auditing a website that has a lot of redundant pages like a photo gallery or a dynamically generated calendar.
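
For instance, the limit described above (crawling URLs that contain "calendar.php" only 3 times) would be set with something like:

```--redundant='calendar.php:3'```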
|
|
|
|
|
|
h3(#auto-redundant). "Auto-redundant ==(--auto-redundant)==":#auto-redundant

*Expects*: integer

*Default*: disabled (with a value of 10 if none has been specified)

*Multiple invocations?*: no

The auto-redundant option sets the limit of how many URLs with identical parameters should be followed.
|
|
|
|
|
|
This can prevent infinite loops caused by pages like photo galleries or catalogues.
|
|
|
|
|
|
h4(#auto-redundant_example). "Example":#auto-redundant_example
|
|
|
|
|
|
With ```--auto-redundant=2``` and given the following list of URLs:
|
|
|
```
|
|
|
http://test.com/?stuff=1
|
|
|
http://test.com/?stuff=2
|
|
|
http://test.com/?stuff=other-stuff
|
|
|
http://test.com/?stuff=blah
|
|
|
http://test.com/?stuff=blah&stuff2=1
|
|
|
http://test.com/?stuff=blah&stuff2=2
|
|
|
http://test.com/?stuff=blah2&stuff2=bloo
|
|
|
http://test.com/path.php?stuff=blah&stuff2=1
|
|
|
```
|
|
|
|
|
|
Only the following will be followed:
|
|
|
```
|
|
|
http://test.com/?stuff=1
|
|
|
http://test.com/?stuff=2
|
|
|
http://test.com/?stuff=blah&stuff2=1
|
|
|
http://test.com/?stuff=blah&stuff2=2
|
|
|
http://test.com/path.php?stuff=blah&stuff2=1
|
|
|
```
|
|
|
|
|
|
h3(#follow-subdomains). "Follow subdomains ==(-f/--follow-subdomains)==":#follow-subdomains

*Expects*: <n/a>

*Default*: disabled

*Multiple invocations?*: no

This flag will cause Arachni to follow links to subdomains.


h3(#obey-robots-txt). "Obey robots.txt file ==(--obey-robots-txt)==":#obey-robots-txt

*Expects*: <n/a>

*Default*: disabled

*Multiple invocations?*: no

This flag will cause the crawler to respect the "robots.txt" file of the website under audit.
h3(#depth). "Depth limit ==(--depth)==":#depth
|
|
|
|
Tells Arachni to audit the HTTP headers of the page.
|
|
*Note*: Header audits use brute force. Almost all valid HTTP request headers will be audited even if there's no indication that the web app uses them.
|
|
|
*Warning*: Enabling this option will result in increased requests, maybe by an order of magnitude.
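
For example, header audits can be enabled alongside the usual link, form and cookie checks (the target URL is illustrative):

<pre><code>arachni --audit-headers --audit-links --audit-forms --audit-cookies http://test.com/</code></pre>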
|
|
|
|
|
|
h2(#coverage). "Coverage":#coverage
|
|
|
|
|
|
h3(#audit-cookies-extensively). "Audit cookies extensively ==(--audit-cookies-extensively)==":#audit-cookies-extensively
|
|
|
|
|
|
*Expects*: <n/a>
|
|
|
*Default*: disabled
|
|
|
*Multiple invocations?*: no
|
|
|
|
|
|
If enabled, Arachni will submit all links and forms of the page along with the cookie permutations.
|
|
|
|
|
|
*Warning*: Will severely increase the scan-time.
|
|
|
|
|
|
h3(#fuzz-methods). "Fuzz methods ==(--fuzz-methods)==":#fuzz-methods
|
|
|
|
|
|
*Expects*: <n/a>
|
|
|
*Default*: disabled
|
|
|
*Multiple invocations?*: no
|
|
|
|
|
|
If enabled, Arachni will submit all links and forms using both the _GET_ and _POST_ HTTP request methods.
|
|
|
|
|
|
*Warning*: Will severely increase the scan-time.
|
|
|
|
|
|
h3(#exclude-binaries). "Exclude binaries ==(--exclude-binaries)==":#exclude-binaries
|
|
|
|
|
|
*Expects*: <n/a>
|
|
|
*Default*: disabled
|
|
|
*Multiple invocations?*: no
|
|
|
|
|
|
Disables inclusion of binary HTTP response bodies in the audit.
|
|
|
|
|
|
*Note*: Binary content can confuse recon modules that perform pattern matching.
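
As a sketch, the coverage options from this section can be combined in a single (slow but thorough) run; the target URL is illustrative:

<pre><code>arachni --audit-cookies-extensively --fuzz-methods --exclude-binaries http://test.com/</code></pre>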
|
|
|
|
|
|
h2(#modules). "Modules":#modules
|
|
|
|
|
|
h3(#lsmod). "List modules ==(--lsmod)==":#lsmod
|
h2(#cli_help_output). "CLI Help Output":#cli_help_output
|
|
|
|
|
<pre><code>
|
|
|
$ arachni -h
|
|
|
|
|
|
Arachni - Web Application Security Scanner Framework v0.4.1dev
|
|
|
Author: Tasos "Zapotek" Laskos <tasos.laskos@gmail.com>
|
|
|
<zapotek@segfault.gr>
|
|
|
|
|
|
(With the support of the community and the Arachni Team.)
|
|
|
|
|
|
|
|
|
Website: http://arachni-scanner.com
|
|
|
Documentation: http://arachni-scanner.com/wiki
|
|
|
|
|
|
|
|
|
Usage: arachni [options] url
|
|
|
General ----------------------
|
|
|
|
|
|
-h
|
|
|
|
|
|
--help Output this.
|
|
|
|
|
|
|
|
|
-v Be verbose.
|
|
|
|
|
|
|
|
|
--debug Show what is happening internally.
|
|
|
(You should give it a shot sometime ;) )
|
|
|
|
|
|
|
|
|
--only-positives Echo positive results *only*.
|
|
|
|
|
|
|
|
|
--http-req-limit=<integer> Concurrent HTTP requests limit.
|
|
|
(Default: 20)
|
|
|
(Be careful not to kill your server.)
|
|
|
(*NOTE*: If your scan seems unresponsive try lowering the limit.)
|
|
|
|
|
|
--http-harvest-last build up the HTTP request queue of the audit for the whole site
|
|
|
and harvest the HTTP responses at the end of the crawl.
|
|
|
(In some test cases this option has split the scan time in half.)
|
|
|
(Default: responses will be harvested for each page)
|
|
|
(*NOTE*: If you are scanning a high-end server and
|
|
|
you are using a powerful machine with enough bandwidth
|
|
|
*and* you feel dangerous you can use
|
|
|
this flag with an increased '--http-req-limit'
|
|
|
to get maximum performance out of your scan.)
|
|
|
(*WARNING*: When scanning large websites with hundreds
|
|
|
of pages this could eat up all your memory pretty quickly.)
|
|
|
--cookie-jar=<filepath> Netscape HTTP cookie file, use curl to create it.
|
|
|
|
|
|
|
|
|
--cookie-string='<name>=<value>; <name2>=<value2>'
|
|
|
|
|
|
Cookies, as a string, to be sent to the web application.
|
|
|
|
|
|
|
|
|
--user-agent=<string> Specify user agent.
|
|
|
|
|
|
--custom-header='<name>=<value>'
|
|
|
|
|
|
|
|
|
Specify custom headers to be included in the HTTP requests.
|
|
|
(Can be used multiple times.)
|
|
|
|
|
|
|
|
|
--authed-by=<string> Who authorized the scan, include name and e-mail address.
|
|
|
(It'll make it easier on the sys-admins during log reviews.)
|
|
|
(Will be appended to the user-agent string.)
|
|
|
|
|
|
--login-check-url=<url> A URL used to verify that the scanner is still logged in to the web application.
|
|
|
(Requires 'login-check-pattern'.)
|
|
|
|
|
|
--login-check-pattern=<regexp>
|
|
|
|
|
|
A pattern used against the body of the 'login-check-url' to verify that the scanner is still logged in to the web application.
|
|
|
(Requires 'login-check-url'.)
|
|
|
|
|
|
Profiles -----------------------
|
|
|
|
|
|
|
|
|
--save-profile=<filepath> Save the current run profile/options to <filepath>.
|
|
|
|
|
|
|
|
|
--load-profile=<filepath> Load a run profile from <filepath>.
|
|
|
(Can be used multiple times.)
|
|
|
(You can complement it with more options, except for:
|
|
|
* --mods
|
|
|
* --redundant)
|
|
|
|
|
|
|
|
|
--show-profile Will output the running profile as CLI arguments.
|
|
|
|
|
|
|
|
|
Crawler -----------------------
|
|
|
|
|
|
|
|
|
-e <regexp>
|
|
|
--exclude=<regexp> Exclude urls matching <regexp>.
|
|
|
(Can be used multiple times.)
|
|
|
|
|
|
|
|
|
-i <regexp>
|
|
|
--include=<regexp> Include *only* urls matching <regex>.
|
|
|
(Can be used multiple times.)
|
|
|
|
|
|
|
|
|
--redundant=<regexp>:<limit>
|
|
|
|
|
|
Limit crawl on redundant pages like galleries or catalogs.
|
|
|
(URLs matching <regexp> will be crawled <limit> amount of times.)
|
|
|
(Can be used multiple times.)
|
|
|
|
|
|
|
|
|
--auto-redundant=<limit> Only follow <limit> amount of URLs with identical query parameter names.
|
|
|
(Default: inf)
|
|
|
(Will default to 10 if no value has been specified.)
|
|
|
|
|
|
--obey-robots-txt obey robots.txt file (default: off)
|
|
|
-f
|
|
|
--follow-subdomains Follow links to subdomains.
|
|
|
(Default: off)
|
|
|
|
|
|
|
|
|
--depth=<integer> Directory depth limit.
|
|
|
(Default: inf)
|
|
|
(How deep Arachni should go into the site structure.)
|
|
|
|
|
|
|
|
|
--link-count=<integer> How many links to follow.
|
|
|
(Default: inf)
|
|
|
|
|
|
|
|
|
--redirect-limit=<integer> How many redirects to follow.
|
|
|
(Default: 20)
|
|
|
|
|
|
|
|
|
--extend-paths=<filepath> Add the paths in <file> to the ones discovered by the crawler.
|
|
|
(Can be used multiple times.)
|
|
|
|
|
|
|
|
|
--restrict-paths=<filepath> Use the paths in <file> instead of crawling.
|
|
|
(Can be used multiple times.)
|
|
|
|
|
|
|
|
|
Auditor ------------------------
|
|
|
|
|
|
-g
|
|
|
|
|
|
--audit-links Audit links.
|
|
|
|
|
|
-p
|
|
|
|
|
|
--audit-forms Audit forms.
|
|
|
|
|
|
-c
|
|
|
|
|
|
--audit-cookies Audit cookies.
|
|
|
|
|
|
--exclude-cookie=<name> Cookie to exclude from the audit by name.
|
|
|
(Can be used multiple times.)
|
|
|
|
|
|
|
|
|
--exclude-vector=<name> Input vector (parameter) not to audit by name.
|
|
|
(Can be used multiple times.)
|
|
|
|
|
|
|
|
|
--audit-headers Audit HTTP headers.
|
|
|
(*NOTE*: Header audits use brute force.
|
|
|
Almost all valid HTTP request headers will be audited
|
|
|
even if there's no indication that the web app uses them.)
|
|
|
(*WARNING*: Enabling this option will result in increased requests,
|
|
|
maybe by an order of magnitude.)
|
|
|
|
|
|
Coverage -----------------------
|
|
|
|
|
|
--audit-cookies-extensively Submit all links and forms of the page along with the cookie permutations.
|
|
|
(*WARNING*: This will severely increase the scan-time.)
|
|
|
|
|
|
--fuzz-methods Audit links, forms and cookies using both GET and POST requests.
|
|
|
(*WARNING*: This will severely increase the scan-time.)
|
|
|
|
|
|
--exclude-binaries Exclude non text-based pages from the audit.
|
|
|
(Binary content can confuse recon modules that perform pattern matching.)
|
|
|
|
|
|
Modules ------------------------
|
|
|
|
|
|
|
|
|
--lsmod=<regexp> List available modules based on the provided regular expression.
|
|
|
(If no regexp is provided all modules will be listed.)
|
|
|
(Can be used multiple times.)
|
|
|
|
|
|
|
|
|
-m <modname,modname..>
|
|
|
|
|
|
--modules=<modname,modname..>
|
|
|
|
|
|
Comma separated list of modules to load.
|
|
|
(Modules are referenced by their filename without the '.rb' extension, use '--lsmod' to list all.
|
|
|
Use '*' as a module name to deploy all modules or as a wildcard, like so:
|
|
|
xss* to load all xss modules
|
|
|
sqli* to load all sql injection modules
|
|
|
etc.
|
|
|
|
|
|
|
|
|
You can exclude modules by prefixing their name with a minus sign:
|
|
|
--mods=*,-backup_files,-xss
|
|
|
The above will load all modules except for the 'backup_files' and 'xss' modules.
|
|
|
|
|
|
Or mix and match:
|
|
|
|
|
|
-xss* to unload all xss modules.)
|
|
|
|
|
|
|
|
|
Reports ------------------------
|
|
|
|
|
|
|
|
|
--lsrep=<regexp> List available reports based on the provided regular expression.
|
|
|
(If no regexp is provided all reports will be listed.)
|
|
|
(Can be used multiple times.)
|
|
|
|
|
|
|
|
|
--repload=<filepath> Load audit results from an '.afr' report file.
|
|
|
(Allows you to create new reports from finished scans.)
|
|
|
|
|
|
--report='<report>:<optname>=<val>,<optname2>=<val2>,...'
|
|
|
|
|
|
<report>: the name of the report as displayed by '--lsrep'
|
|
|
|
|
|
(Reports are referenced by their filename without the '.rb' extension, use '--lsrep' to list all.)
|
|
|
(Default: stdout)
|
|
|
(Can be used multiple times.)
|
|
|
|
|
|
|
|
|
Plugins ------------------------
|
|
|
|
|
|
|
|
|
--lsplug=<regexp> List available plugins based on the provided regular expression.
|
|
|
(If no regexp is provided all plugins will be listed.)
|
|
|
(Can be used multiple times.)
|
|
|
|
|
|
--plugin='<plugin>:<optname>=<val>,<optname2>=<val2>,...'
|
|
|
|
|
|
<plugin>: the name of the plugin as displayed by '--lsplug'
|
|
|
|
|
|
(Plugins are referenced by their filename without the '.rb' extension, use '--lsplug' to list all.)
|
|
|
(Can be used multiple times.)
|
|
|
|
|
|
|
|
|
Proxy --------------------------
|
|
|
|
|
|
|
|
|
--proxy=<server:port> Proxy address to use.
|
|
|
|
|
|
|
|
|
--proxy-auth=<user:passwd> Proxy authentication credentials.
|
|
|
|
|
|
|
|
|
--proxy-type=<type> Proxy type; can be http, http_1_0, socks4, socks5, socks4a
|
|
|
(Default: http)
|
|
|
|
|
|
</code></pre>