It’s no secret that in the past several years there has been a huge push toward “cloud-based services.” These services promise corporations reduced operational costs; CPU, memory, and operating systems on demand so that equipment no longer sits underutilized; storage as needed; and capabilities for many things that companies used to have to architect individually.

Applications that were previously installed on a per-enterprise basis are now offered in multi-tenant architectures, with a host of providers competing for the business. Whether it’s companies putting their own apps in cloud services such as AWS and Azure, or providers offering SaaS applications, the cloud is the new frontier in a landscape previously made up of decentralized computing, mainframes, MPLS and whatever new technology someone came up with over the last 20 years.

If you have spent any time in the industry over that period, you will recognize that every few years someone comes up with an idea that becomes a “must have” technology or service. People start using that technology, eventually a new idea comes along, and the cycle starts all over again.

The most recent trend I have seen is the drive toward cloud-based services for networking and/or automation. This is an area in which I have a lot of experience, having spent the last 20-plus years consulting.

As applications moved to the cloud, hardware manufacturers watched their business shift away from traditional datacenter builds and large hardware purchases in what seemed like the blink of an eye. The gap left by those declining revenue streams had to be filled, because the trend in compute for years has been to virtualize and/or abstract these technologies.

All of a sudden, we started seeing vendors either inventing or acquiring products to manage their equipment, or building new solutions with a cloud-based control model in mind.
Whichever path was chosen, the heart of the effort was clear: make it so simple that anyone can do it. You no longer need to know how to ACTUALLY DESIGN ANYTHING; we will just give you some shiny buttons to click and everything will just work.

Well, therein lies the problem. People, if given the chance, believe most of what they are told. As a result, I have seen plenty of things fail, or turn out not to be possible at all, because of these marketing statements.

The new, up-and-coming generation was raised like this, so it’s no surprise. Want to find anything out? Ask Google or Alexa or Siri; you don’t even have to do any research. Want to live-video yourself walking down the street? No problem. Want to instantly know what all your friends are doing at any given time? It’s that simple.

The net effect is that many contemporary technical professionals don’t know how to design anything because, supposedly, they don’t NEED to know. Nothing annoys me more than that claim, because when you talk to someone about what they are trying to do, and they believe all the hype surrounding these promised functions, they are incredulous when you ask them the questions that actually matter.

Questions about compatibility, security implications, application function, bloat and, best of all, interoperability are met with the assumption that “it worked in my old traditional network, so it must just work in this new technology, because it’s so simple.”

So many times in the last couple of years, we have been asked to work on some type of hybrid network. When you look at what the customer is trying to do and explain that it won’t work, you often get the “incredulous look.” Why? Because “simplification marketing” has convinced people that there is never any reason for things not to work anymore, while fewer and fewer of them have any background in how things actually do work.

This is not meant to bash the modernization of our industry. In fact, a lot of the newer technologies are very slick. It’s more of a caution to those who would believe anything that they are told. If you can’t ask the right questions, you could be in for some heartache.

Case number 1
A customer comes to me and tells me they are redesigning their current network, which consists of Juniper Netscreen firewalls, and replacing it with Meraki. After looking at the requirements, I point out that they have three Internet connections coming in and ask how they think they are going to handle that, since Meraki supports two uplink Internet connections. The customer replies, “What do you mean? I already bought all the equipment, why didn’t they tell me?” I then explain that, typically, we get brought in before the customer makes the purchase for this exact reason. So now the customer is in a quandary: they purchased equipment they were told was so easy to use that the idea of it not working in their environment seemed unbelievable. In the end, the solution they bought blindly did not meet their requirements, created a nightmare for the client, and was never deployed in the manner intended.

Case number 2
A customer decides to use a cloud-based provider for access to their application. The application works over a VPN tunnel to the cloud (Azure in this case). The customer expects redundant connections to the provider, since they have two Internet links. Azure, however, cannot accommodate this because the customer has two disparate IP ranges on the ISP links, so there is no mechanism in place for dynamic failover. The customer cannot deploy the solution as intended because they assumed this would not be a problem, since it’s “the cloud.”
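To make that failure mode concrete, here is a minimal sketch, in plain Python, of what the customer assumed versus what a tunnel pinned to a single on-premises peer address actually does. The class and the IP addresses are hypothetical illustrations, not Azure configuration.

```python
# Toy model of a site-to-site VPN whose cloud end only accepts one
# pre-configured on-premises peer address (no dynamic failover).
# All names and addresses are hypothetical, for illustration only.

class StaticPeerTunnel:
    def __init__(self, expected_peer_ip: str):
        # The cloud side knows exactly one local gateway address;
        # traffic from any other source looks like an unknown peer.
        self.expected_peer_ip = expected_peer_ip

    def accepts(self, source_ip: str) -> bool:
        return source_ip == self.expected_peer_ip


# The customer has two ISP links in disparate public ranges.
isp_a = "198.51.100.10"   # primary link (hypothetical address)
isp_b = "203.0.113.25"    # backup link  (hypothetical address)

tunnel = StaticPeerTunnel(expected_peer_ip=isp_a)

print(tunnel.accepts(isp_a))  # True  -> tunnel comes up on the primary link
print(tunnel.accepts(isp_b))  # False -> after failing over to ISP B, the cloud
                              #          end sees an unexpected peer and the
                              #          tunnel never re-establishes on its own
```

Without some dynamic mechanism on both ends, two Internet links do not automatically add up to a redundant VPN.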

Case number 3
A customer decides that, instead of traditional file shares, they want to use a cloud-based service. They choose Box as the provider and start rolling it out. Suddenly everything on their Internet connections is dog slow! We are asked to look at it, and it quickly becomes apparent that, as more users adopt Box and share the same folders, Internet bandwidth use has jumped more than 200% from what it was previously. When this was brought to the customer’s attention, the reply was that they didn’t understand why and wanted to know how we could stop it from happening.
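A rough back-of-the-envelope calculation shows why the jump happened. The numbers below are hypothetical; the point is that a file which used to cross only the LAN now crosses the Internet link once for every user who syncs or downloads it.

```python
# Back-of-the-envelope comparison: on-prem file share vs. cloud file sync.
# All numbers are hypothetical, chosen only to illustrate the multiplier.

file_size_mb = 200          # one shared folder update
users_sharing_folder = 40   # users whose clients sync or download the change

# Traditional on-prem share: the update stays on the LAN, so it adds
# nothing to the Internet circuits.
internet_mb_on_prem = 0

# Cloud-hosted share: the upload goes out once, and every syncing client
# pulls its own copy back down over the same Internet circuits.
internet_mb_cloud = file_size_mb * (1 + users_sharing_folder)

print(f"On-prem share Internet traffic: {internet_mb_on_prem} MB")
print(f"Cloud share Internet traffic:   {internet_mb_cloud} MB")
```

Nothing was broken; the design simply moved what used to be LAN traffic onto the Internet circuits, and nobody had asked what that would do to them.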

Case number 4
A customer purchases a wireless solution with on-premises controllers. When asked, the customer insists that they are going to deploy the solution on their own and do not need our help. On the day of the cutover, we get a call: the entire hospital (yes, a whole hospital!) is down. As it turned out, the engineer deploying the solution had no experience with it and simply entered what I will term a “default configuration,” in which the access points tunnel all traffic back to the main controller before it exits the network. By doing this, he or she created a situation where wireless traffic in site A would go to site B, where the controller was, then traverse back to site A, where the resources were, then back to site B to the controller, and finally back to site A, where the originating request was!
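To see why that took the hospital down, here is a small sketch that simply counts WAN traversals for the tunnel-everything configuration described above versus traffic bridged locally at the access point. The site labels and the hop model are illustrative, not vendor configuration.

```python
# Count how many times one wireless request/response crosses the WAN when
# all AP traffic is tunneled to a central controller, versus when the AP
# bridges traffic locally. Purely illustrative; not vendor configuration.

def wan_crossings(path):
    """Count the hops in the path that cross between different sites."""
    return sum(1 for a, b in zip(path, path[1:]) if a != b)

# The client and the resource it needs are both in Site A;
# the controller lives in Site B.
tunneled_path = [
    "A",  # wireless client in Site A
    "B",  # traffic tunneled to the controller in Site B
    "A",  # controller forwards it back to the resource in Site A
    "B",  # the reply is pulled back to the controller in Site B
    "A",  # and finally tunneled back to the client in Site A
]

local_bridge_path = ["A", "A"]  # client to resource, entirely within Site A

print("Tunnel-everything WAN crossings:", wan_crossings(tunneled_path))     # 4
print("Local bridging WAN crossings:   ", wan_crossings(local_bridge_path)) # 0
```

Four WAN trips per transaction, instead of none, is exactly the kind of detail that “it just works” marketing glosses over.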

My point: even with all of the new technology out there, you still have to be aware of what’s under the covers, because things are not as simple as they may seem, and the mistakes you make may cost you more than you think you’re saving.

There is a way to design the environment based on needs, and still leverage cost-saving new technologies where they make sense. We help customers with this every day.