No doubt every engineer has their own twist on automating configuration and deployment on networks; however, with the ever-increasing pace of software releases for some vendors' hardware, keeping your scripts updated can become a full-time job in itself. There will always be two schools of engineer: the home-brew school and the purchased-software school, each with compelling reasons for its own approach and for why the other is wrong. I, personally, prefer the purchased-software route with a small dash of home-brew scripts to accomplish my job. Very small. Below I'll outline experiences where moving to purchased software solved the many problems our home-brew scripts were giving us, and where a small but powerful set of home-brew scripts gave us complete control over building, deploying, operating, and debugging the network.
I once worked on a network of close to 1,000 end nodes, globally dispersed and growing. Operations at the time used hand-built scripts, both static text configurations and Perl scripts, to deploy network equipment to new offices. When those configurations and scripts were first written, they were second to none; however, it wasn't long before new code releases, new protocols, and an overall better, and entirely different, network topology crippled the Perl scripts' ability to deploy and maintain the network without hours of rework, something no one was paid to do. Meanwhile, there were many, and I do mean many, tools available to automate the deployment and sanity checking of the network. I won't name any particular software; plenty can do this, some vendor-specific, others vendor-neutral, so choose the one that best suits your infrastructure. Once the software was chosen, the ability to deploy new devices, check and update code, sanity-check all configurations, push missing configuration, and operate the network really didn't change much. But when a new code release was issued, a command was deprecated, an output field changed, or any other variable shifted, there was zero recoding involved: we simply set the expectations to be different, and that was it, saving hundreds of man-hours originally spent on coding and putting them back into the network.
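The "set the expectations to be different" idea is worth spelling out, since it's the whole reason a release upgrade cost us zero recoding: expected values live in a table that gets edited, not in parsing logic that gets rewritten. A minimal sketch of the pattern; the field names and device facts here are hypothetical, not from any particular tool:

```python
# Sanity checks driven by data, not code: when a new release renames a
# field or changes a value, only the EXPECTED table is edited -- no
# parsing or audit logic is touched. All names below are hypothetical.

EXPECTED = {
    "os_version": "15.2(4)M7",    # bump this when a new release ships
    "ntp_server": "10.0.0.1",
    "snmp_community": "netops-ro",
}

def audit(device_facts: dict) -> list:
    """Return a list of (field, expected, actual) mismatches."""
    problems = []
    for field, want in EXPECTED.items():
        got = device_facts.get(field)
        if got != want:
            problems.append((field, want, got))
    return problems

if __name__ == "__main__":
    # A device running the right code but pointed at the wrong NTP
    # server, and missing its SNMP community entirely:
    facts = {"os_version": "15.2(4)M7", "ntp_server": "10.0.0.2"}
    for field, want, got in audit(facts):
        print(f"{field}: expected {want!r}, found {got!r}")
```

The same audit function survives every release; only the data changes. That is the hundreds-of-man-hours difference between this and a Perl script with the old field layout baked into its regexes.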
Now, while the purchased software saved us the headache of updating home-grown code to deploy correct configurations, prevent false positives, and locate real issues, it also lacked a few things we wished for; thus, the scripts came back to the rescue. The kicker: we used the very tool that managed the network to deploy those scripts! Each highly specific piece of output a script depended on was classified as one of three kinds: it never changes, it rarely changes, or its changes can be mirrored in the script with ease. These scripts provided the "last mile" for deployment, testing, sanity checking, and operating the network; there were not very many of them, and each was lightweight, consisting of very few lines of code.
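That three-way classification is what kept the "last mile" scripts cheap to own: tag every check by how its underlying output behaves across releases, and after an upgrade only pull the volatile ones up for review. A sketch of the bookkeeping, with hypothetical check names standing in for whatever your scripts actually parse:

```python
# Each scripted check is tagged with how its source output behaves
# across software releases -- the classification described above.
# After a code upgrade, only "mirrored" checks need human review;
# the rest are trusted as-is. Check names are hypothetical.

CHECKS = {
    "serial_number_format": "never",     # hardware-burned, release-proof
    "interface_naming":     "rarely",    # survives nearly every release
    "bgp_summary_fields":   "mirrored",  # column layout tracked by hand
}

def review_queue(checks: dict) -> list:
    """Checks whose parsing must be re-verified after an upgrade."""
    return [name for name, cls in checks.items() if cls == "mirrored"]

if __name__ == "__main__":
    print("Review after upgrade:", review_queue(CHECKS))
```

If the review queue ever grows faster than a few lines of code can absorb, that's the signal the check belongs in the purchased tool instead of a script.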
In the end it is you who needs to decide what is best for your network, not some slick salesman trying to shovel crap down your throat; nor should your ego (or the egos of those around you) about "I can code that myself" get in the way of productivity, of operating your network, of giving your employer your most effective hours (not to mention your work/life balance). There is a balance to be struck, and I am not willing to say "one size fits all" either. There are some very specific cases where home brew outshines any product available, or perhaps there isn't a product available that does what you're looking to do. In the latter case, code it up and decide whether you'll sell it or release the code to an open-source project.