By now, we all know that VDP, by default, backs up 8 VMs in parallel. This is because VDP internally configures 8 proxies during deployment, and the user has no direct control over that number.
So if you have a backup job that covers 20 VMs, VDP will pick 8 of them at a time; at any given moment it cannot back up more than 8 VMs in parallel.
The maximum number of active proxies is 8, and there is no way to go beyond that.
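As an aside, this "pick 8 at a time" dispatch model can be pictured with plain shell tooling. The sketch below is only an analogy using xargs, not how VDP works internally: 20 jobs are drained through a pool of at most 8 concurrent workers.

```shell
# Analogy only: dispatch 20 "backup" jobs with at most 8 running in parallel,
# the way VDP drains a 20-VM backup job through its 8 proxies.
# xargs passes each item to sh -c as $0; -P 8 caps the concurrency.
seq 1 20 | xargs -n 1 -P 8 sh -c 'echo "backing up VM $0"'
```

The 20 lines come out in no particular order, since up to 8 workers run at once.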
But can it be reconfigured down to, say, 3? I needed exactly that to limit resource consumption, since I am running fairly low-spec servers.
Though there is no direct, officially recommended way, I ran a small experiment a couple of weeks ago and reconfigured the proxies down to 3 with a couple of simple steps.
*Note – Before you start, make sure to disable all the backup jobs. You can enable them again once the reconfiguration is done. Ideally I would do this within the backup window, where I have control over the schedule.
Steps –
- SSH to the VDP appliance
- cd /usr/local/avamarclient/etc/
- ./registerproxy.sh
The script then prompts as follows:
=== Client Registration and Activation ===
This script will register and activate the client with the Administrator server.
Enter the Avamar server domain [clients]: <Press ENTER here>
Number of proxy clients (1 to 8) [8]: 3 <Enter the number of proxies to limit. Max is 8>
You are done now. The appliance will take care of the rest.
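For repeatability, the interactive run above could be wrapped in a small script. This is only a sketch, and it rests on an assumption I have not verified: that registerproxy.sh reads its two prompts (domain and proxy count) from stdin. If it does not, simply answer the prompts by hand as shown in the steps.

```shell
#!/bin/sh
# Sketch: answer registerproxy.sh's two prompts non-interactively.
# ASSUMPTION: the script accepts piped answers on stdin; verify before relying on it.

PROXY_COUNT=3   # desired number of proxies, must be 1 to 8

# VDP only accepts 1 to 8 proxies, so guard the value first.
validate_count() {
    case "$1" in
        [1-8]) return 0 ;;
        *)     return 1 ;;
    esac
}

validate_count "$PROXY_COUNT" || { echo "proxy count must be 1-8" >&2; exit 1; }

if [ -x /usr/local/avamarclient/etc/registerproxy.sh ]; then
    cd /usr/local/avamarclient/etc/
    # The empty first line accepts the default [clients] domain;
    # the second line sets the number of proxies.
    printf '\n%s\n' "$PROXY_COUNT" | ./registerproxy.sh
else
    echo "registerproxy.sh not found - run this on the VDP appliance" >&2
fi
```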
Output from the script, for your reference:
SCRIPT : /usr/local/avamarclient/etc/avagent.d
PARAMETERS : register 127.0.0.1 clients
Avamar avagent configuration:
Avamar home : /usr/local/avamarclient
avagent : /usr/local/avamarclient/bin/avagent-vmware.bin
list of agents : /usr/local/avamarclient/etc/avagent-list
common options : /usr/local/avamarclient/etc/avagent-flags
library path : /usr/local/avamarclient/lib:/usr/lib/vmware-vix-disklib/lib64 :
*** vApp .pin missing ***
Setting number of instances to 0.
Number of vapp proxies : 0
Number of file proxies : 3
Number of image proxies : 3
Configured agents:
name vardir pin
---- ------ ---
proxy-1 var-proxy-1 vmwfile-linux.pin,vmwfile-windows.pin,vcbimage-linux.pin,vcbimage-windows.pin,vdrmigration.pin,nwcomm.pin,unix.pin
proxy-2 var-proxy-2 vmwfile-linux.pin,vmwfile-windows.pin,vcbimage-linux.pin,vcbimage-windows.pin,vdrmigration.pin,nwcomm.pin
proxy-3 var-proxy-3 vmwfile-linux.pin,vmwfile-windows.pin,vcbimage-linux.pin,vcbimage-windows.pin,vdrmigration.pin,nwcomm.pin
proxy-4 var-proxy-4 (none)
proxy-5 var-proxy-5 (none)
proxy-6 var-proxy-6 (none)
proxy-7 var-proxy-7 (none)
proxy-8 var-proxy-8 (none)
…. all avagents stopped
Common agent settings:
--informationals
--sysdir=/usr/local/avamarclient/etc
--bindir=/usr/local/avamarclient/bin
--flagfile="/usr/local/avamarclient/var/avagentAll.cmd"
--mcsaddr=127.0.0.1
--dpndomain=clients
avagent Info <5008>: Logging to /usr/local/avamarclient/var-proxy-1/avagent.log
avagent Info <5174>: - Reading /usr/local/avamarclient/var-proxy-1/avagent.cmd
avagent Info <5174>: - Reading /usr/local/avamarclient/etc/avagent-flags
avagent Info <5008>: Logging to /usr/local/avamarclient/var-proxy-2/avagent.log
avagent Info <5174>: - Reading /usr/local/avamarclient/var-proxy-2/avagent.cmd
avagent Info <5174>: - Reading /usr/local/avamarclient/etc/avagent-flags
avagent Info <5008>: Logging to /usr/local/avamarclient/var-proxy-3/avagent.log
avagent Info <5174>: - Reading /usr/local/avamarclient/var-proxy-3/avagent.cmd
avagent Info <5174>: - Reading /usr/local/avamarclient/etc/avagent-flags
avagent Info <5008>: Logging to /usr/local/avamarclient/var-proxy-4/avagent.log
avagent Info <5174>: - Reading /usr/local/avamarclient/var-proxy-4/avagent.cmd
avagent Info <5174>: - Reading /usr/local/avamarclient/etc/avagent-flags
avagent Info <5008>: Logging to /usr/local/avamarclient/var-proxy-5/avagent.log
avagent Info <5174>: - Reading /usr/local/avamarclient/var-proxy-5/avagent.cmd
avagent Info <5174>: - Reading /usr/local/avamarclient/etc/avagent-flags
avagent Info <5008>: Logging to /usr/local/avamarclient/var-proxy-6/avagent.log
avagent Info <5174>: - Reading /usr/local/avamarclient/var-proxy-6/avagent.cmd
avagent Info <5174>: - Reading /usr/local/avamarclient/etc/avagent-flags
avagent Info <5008>: Logging to /usr/local/avamarclient/var-proxy-7/avagent.log
avagent Info <5174>: - Reading /usr/local/avamarclient/var-proxy-7/avagent.cmd
avagent Info <5174>: - Reading /usr/local/avamarclient/etc/avagent-flags
avagent Info <5008>: Logging to /usr/local/avamarclient/var-proxy-8/avagent.log
avagent Info <5174>: - Reading /usr/local/avamarclient/var-proxy-8/avagent.cmd
avagent Info <5174>: - Reading /usr/local/avamarclient/etc/avagent-flags
…. registered all agents!
…. Check the MCS status - Tue Nov 19 23:38:56 PST 2013
Identity added: /home/dpn/.ssh/dpnid (/home/dpn/.ssh/dpnid)
dpnctl: INFO: MCS status: up.
…. MCS appears to be ready.
avagent Info <5008>: Logging to /usr/local/avamarclient/var-proxy-1/avagent.log
avagent Info <5174>: - Reading /usr/local/avamarclient/var-proxy-1/avagent.cmd
avagent Info <5174>: - Reading /usr/local/avamarclient/etc/avagent-flags
avagent Info <5417>: daemonized as process id 13180
avagent Info <5008>: Logging to /usr/local/avamarclient/var-proxy-2/avagent.log
avagent Info <5174>: - Reading /usr/local/avamarclient/var-proxy-2/avagent.cmd
avagent Info <5174>: - Reading /usr/local/avamarclient/etc/avagent-flags
avagent Info <5417>: daemonized as process id 13186
avagent Info <5008>: Logging to /usr/local/avamarclient/var-proxy-3/avagent.log
avagent Info <5174>: - Reading /usr/local/avamarclient/var-proxy-3/avagent.cmd
avagent Info <5174>: - Reading /usr/local/avamarclient/etc/avagent-flags
avagent Info <5417>: daemonized as process id 13192
…. agents started.
Registration Complete.
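One way to double-check the result is to count which rows of the "Configured agents" table carry pin files; only those proxies are active, the rest show "(none)". The awk one-liner below is my own sketch, fed here with trimmed sample rows; on the appliance you would pipe in the real script output instead.

```shell
# Count active proxies: rows whose third column lists .pin files, not "(none)".
count_active() {
    awk '$1 ~ /^proxy-/ && $3 != "(none)" { n++ } END { print n+0 }'
}

# Trimmed sample of the table above, for illustration only.
count_active <<'EOF'
proxy-1 var-proxy-1 vmwfile-linux.pin,vmwfile-windows.pin
proxy-2 var-proxy-2 vmwfile-linux.pin,vmwfile-windows.pin
proxy-3 var-proxy-3 vmwfile-linux.pin,vmwfile-windows.pin
proxy-4 var-proxy-4 (none)
proxy-5 var-proxy-5 (none)
EOF
```

With the sample rows above it prints 3, matching the three reconfigured proxies.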
I enabled my backup job and ran 'Backup Now'. I could see my VDP appliance now picking up only 3 VMs at a time for backup instead of 8.
My VDP appliance has been running without any issues for a couple of weeks since.