Capacity Planning in Marketing

Know Your Limitations

In his article What Is a Constraint in Marketing?[1], John DuBois claims, “A constraint limits or holds back the possible success of an operational strategy. The theory of constraints, an organizational change method focused on process improvement, contends that every organization must face at least one constraint, or weak link in the process chain. In regard to marketing, constraints may affect product, price, place or promotion.”

In his book The Goal[2], Dr. Eliyahu Goldratt introduced the Theory of Constraints (TOC), a highly focused methodology that prioritizes improvement activities to generate rapid improvement, with benefits that include:

  • Increased profit
  • Fast improvement
  • Improved capacity
  • Reduced lead times
  • Reduced inventory

TOC is a good methodology to apply to a marketing system, and it can help CMOs understand their departments, their systems, and their data in a way that will help them make better and more profitable decisions. The primary focus of TOC is to rapidly improve constrained processes, since only that results in more profit; optimizing non-constrained processes and systems is a waste of time, as they won’t produce any significant benefits, financial or otherwise.

In his article 8 Steps to Creating an Effective Marketing Information System, Ira Kalb claims, “The primary way a CMO can prove his or her worth is to collect the data on the return the company is realizing on its marketing investment. To do that, a comprehensive marketing information system is required.”[3] In a perfect world, Kalb argues, an MIS would be integrated into all of a business’s systems and processes.[3] In this rather idealistic world, every sale, lead, and marketing offer would be traced back to the marketing effort that produced it, and every dollar of marketing spend could be valued and quantified.[3] Every compliment or complaint could be tracked back to its source, too, bringing new measurable value to social media.[3] This is all a worthy endeavor, of course, but not easy to do.

“Rather than wait for the dream to materialize, marketers need to improvise. They need a system that enables them to (1) make better decisions and (2) support those decisions with verifiable data,” says Kalb.[3] Systems should be checked and itemized, with marketing information data gleaned from them.[3] A gap analysis should be initiated to identify what information isn’t being provided to marketers in the current system.[3] Additional systems that can provide the needed marketing information should also be created.[3] These systems should then be integrated company-wide, if possible, as long as the process isn’t prohibitive, says Kalb.[3]

“We are drowning in information but starved for knowledge,”[4] said John Naisbitt. His quote is probably more relevant today than it was when he wrote it forty years ago. Big Data alone has its seven V’s: Volume, Velocity, Variety, Variability, Veracity, Visualization, and Value. Volume is growing exponentially. Data collection, correlation, and use are not only speeding up massively but going real-time, which is becoming a minimum requirement for many IT departments because their customers require nothing less.

With the ability to collect, append, and send data to or receive data from almost anywhere in the world, variety and variability are increasing substantially. The veracity, or accuracy, of data is also getting easier to understand and ensure, as many data integration tools have data cleansing and verification features built in. Visualization of data is exceptionally easy today because business intelligence tools like Domo, Power BI, Qlik, and Tableau have simplified the creation of dashboards. Five of the ten most valuable companies in America are companies whose lifeblood is data, so the growing value of data is unquestionable.

However, never before have we had so many ways to collect, track, quantify, metatag, and visualize data yet been so ignorant of how to use it. Credit card companies track every penny we spend. Social media companies gather every like and dislike we register. IoT devices capture sensor data of all kinds, and an explosion of such devices is coming. Gyroscopes and accelerometers record every physical movement of our phones, while mobile apps collect highly valuable personal information, including our every location. Never before have so few tracked so many, while profiting so handsomely from so much captured data.

This data collection should be a godsend for companies, but often they don’t know how to collect and use it. The Flexera 2020 State of the Cloud Report[5] showed that organizations are wasting 30 percent of their cloud spend, paying for services they simply don’t need. That’s a huge sum of money being flushed away, to say nothing of the energy squandered with it; at a time when going green is a competitive advantage, this is literally an opportunity going to waste. “Some of the increase is a result of the extra capacity needed for current cloud-based applications to meet increased demand as online usage grows,” said the survey’s authors.[5] “Other organizations may accelerate migration from data centers to cloud in response to reduced headcount, difficulties in accessing data center facilities and delays in hardware supply chains.”[5]

The American marketing author Philip Kotler[6] defines a marketing information system (MIS) as a “continuing and interacting structure of people, equipment and procedures to gather, sort, analyse, evaluate, and distribute pertinent, timely and accurate information for use by marketing decision makers to improve their marketing planning, implementation, and control.” One of the tools that can help with control is capacity planning, which aims to minimize the discrepancy between the capacity of an organization and the demands of its customers. As Zoltán Sebestyén and Viktor Juhász claim in their paper The Impact Of The Cost Of Unused Capacity On Production Planning Of Flexible Manufacturing Systems[7], “capacity is one of the most important measures of resources used in production. Its definition and analysis are therefore one of the key areas of production management.”

Demand for an organization’s capacity varies based on changes in production output, such as increasing or decreasing the production quantity of an existing product or producing new products. Better utilization of existing capacity can be accomplished through improvements in overall equipment effectiveness (OEE). Capacity can be increased by introducing new techniques, equipment, and materials, increasing the number of workers or machines, increasing the number of labor shifts, or acquiring additional production facilities. In a nutshell, capacity planning aims to eliminate unnecessary resources while fully utilizing the necessary ones.
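
To make the OEE mention concrete: the conventional formula multiplies availability, performance, and quality, each expressed as a ratio between 0 and 1. A minimal Python illustration with hypothetical figures:

```python
# Minimal illustration of the conventional OEE calculation
# (availability x performance x quality); all figures are hypothetical.

def oee(availability: float, performance: float, quality: float) -> float:
    """Each factor is a ratio between 0 and 1."""
    return availability * performance * quality

# A line that ran 7 of 8 scheduled hours (availability 0.875),
# at 90% of its ideal rate, with 98% of output defect-free:
print(f"OEE = {oee(0.875, 0.90, 0.98):.1%}")  # OEE = 77.2%
```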

Capacity planning works hand-in-hand with Kalb’s condition that every dollar spent on marketing be valued and quantified.[3] It attempts to ensure there is as little wastage as possible across the IT estate, including a marketing department’s IT activities. The question is, can marketing’s IT activities be quantified so that every sale, lead, and marketing offer, along with the cost of the employees and software handling these activities, can be traced back to the marketing effort that produced them? This would help enormously with ROI justifications.

In his article ROI Valuation, The IT Productivity GAP[8], Erik Brynjolfsson states, “The critical question facing IT managers today is not ‘Does IT pay off?’ but rather, ‘How can we best use computers?’” “Even when their IT intensity is identical, some companies have only a fraction of the productivity of their competitors,” notes Brynjolfsson.[8] Unlike a certificate of deposit, an investment in IT doesn’t produce an expected or guaranteed rate of return.[8] An IT estate has so many moving parts, from the software running atop it to the people overseeing it, so many variables that interact with each other, and so many unintended demand spikes that it’s almost impossible to game out a definitive return on investment for many IT costs.

Besides that, “IT is only the tip of a much larger iceberg of complementary investments that are the real drivers of productivity growth,” contends Brynjolfsson.[8] “In fact, our research found that for every dollar of IT hardware capital that a company owns, there are up to $9 of IT-related intangible assets, such as human capital — the capitalized value of training — and organizational capital — the capitalized value of investments in new business-process and other organizational practices. Not only do companies spend far more on these investments than on computers themselves, but investors also attach a larger value to them.”[8]

Brynjolfsson cautions, “Too often, the flow of information speeds up dramatically in highly automated parts of the value chain only to hit logjams elsewhere, particularly where humans must get involved and processes aren’t updated. The result is little or no change in overall performance. A gigabit Ethernet network does no good if the real bottleneck is a manager’s ability to read and act on the information.”[8] “In the information economy, the scarce resource is not information, but the capacity of humans to process that information,” warns Brynjolfsson.[8]

Monitoring and Measuring Everything

Man’s role as “the best condition monitoring device ever invented,” as Plant & Works Engineering[9] calls it, is surely under threat today from tools like business process management (BPM) software, robotic process automation (RPA), hyperautomation, artificial intelligence for IT operations (AIOps), and real-time monitoring. The experience of an IT technician who intimately understands every nuance of a system he has been working with for years is, of course, priceless, but today’s monitoring and alerting systems can go further, helping systems self-heal, as AIOps claims to do.

Gartner defines AIOps as a platform that utilizes “big data, modern machine learning and other advanced analytics technologies to, directly and indirectly, enhance IT operations (monitoring, automation and service desk) functions with proactive, personal and dynamic insight. AIOps platforms enable the concurrent use of multiple data sources, data collection methods, analytical (real-time and deep) technologies, and presentation technologies.”[10] AIOps analyzes a system’s data, learning about a company’s day-to-day operations, including marketing, and then either fixes current issues it finds or proactively attempts to fix potential issues it sees coming down the pipe.

In his article The State of AIOps: Understanding the Difference Between Tools[11], Trent Fitz says, “modern apps are comprised of millions of containers and serverless functions strewn across multiple clouds, and each one of these application components may exist for days or less than a second. Stitching all of this information together while trying to find outliers is magnitudes more difficult than trying to isolate a rogue Java thread on a typical application server.”

Monitoring memory usage and CPU utilization, overseeing the spin-up and spin-down of clusters, observing server fan speeds, and ensuring that error alerts needing attention are checked rather than dismissed as superfluous is an enormous task. Figure 1 shows the monitoring setup of a typical data center.


Figure 1: Monitoring a data center

Source: ITRS

But help is on its way. In his article How AIOps is already transforming IT[12], Atul Soneja explains the AIOps process when alerts occur: “The AIOps solution automatically opens the ticket and enriches it with log information, events, and metrics before directing it to the right person. Now, all the information is already there, and IT knows what to do with it. All of this is handled automatically behind the scenes, so teams never have to close a ticket manually again.”
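
Soneja’s enrichment flow can be pictured as a small pipeline: open the ticket, attach recent logs and metrics, then route it. The sketch below is a schematic illustration only; the helper functions and names are hypothetical stand-ins, not any vendor’s actual API:

```python
# Simplified sketch of AIOps-style alert enrichment and routing. The
# helper functions are hypothetical stand-ins for whatever log store,
# metrics API, and routing rules an organization actually uses.
from dataclasses import dataclass, field

def fetch_logs(host: str, window_minutes: int) -> list[str]:
    # Stub: in practice, query the log store for the host's recent lines.
    return [f"{host}: OutOfMemoryError in worker pool"]

def fetch_metrics(host: str, names: list[str]) -> dict[str, float]:
    # Stub: in practice, read point-in-time values from the metrics API.
    return {name: 97.0 for name in names}

def route_for(alert: str) -> str:
    # Stub: in practice, apply on-call/routing rules to the alert type.
    return "platform-oncall" if "memory" in alert.lower() else "service-desk"

@dataclass
class Ticket:
    alert: str
    host: str
    logs: list[str] = field(default_factory=list)
    metrics: dict[str, float] = field(default_factory=dict)
    assignee: str = ""

def open_enriched_ticket(alert: str, host: str) -> Ticket:
    # Open the ticket, attach context, and direct it to the right person,
    # mirroring the flow Soneja describes.
    ticket = Ticket(alert=alert, host=host)
    ticket.logs = fetch_logs(host, window_minutes=15)
    ticket.metrics = fetch_metrics(host, ["cpu_pct", "mem_pct"])
    ticket.assignee = route_for(alert)
    return ticket

print(open_enriched_ticket("High memory usage", "app-server-12"))
```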

Today’s AIOps solutions collect metrics and logs, collate event streams with dependency data, and deliver end-to-end, three-dimensional windows into a company’s operations system. As Trent Fitz sees it, “This means eliminating the No. 1 problem AIOps tools have experienced thus far, i.e., limited visibility and context due to the lack of cardinality in the data they’re analyzing.”[11] This not only enhances the system’s ability to understand a problem and initiate a solution but also to better quantify the entire operation in an activity-based costing way, i.e., gaining such granular detail on data and its use that it might be possible to collate and quantify data use within all of a company’s departments.

Cloud Bursting

For Sebestyén and Juhász, “There are three aspects of the problem of conventional capacity measures: the absence of economic content, quantity based approach, and the unduly high emphasis laid on technical processes.”[7] “Changes in the nature of production, and the enhanced significance of auxiliary processes made calculations necessary for production and service systems where processes are difficult to quantify,” they argue.[7] They believe that “If capacity measures could side step the problems discussed above, i.e. if they could include the value of resources, and could refer to the costs of unused capacity, then better decisions could be made in a number of cases.”[7]

Adding a cloud component to a company’s IT department is almost a given these days, but services like AWS, Azure, Google Cloud, Alibaba Cloud, and other cloud providers can be pricey. Managed cloud services are pricier still. In his paper Determining an optimal mix of hybrid cloud computing for enterprises[13], I. Lee argues that:

“Capacity planning is a challenging task when there is an unpredictable, fluctuating computing demand with many peaks and troughs. Without a solid evaluation model, estimating the tradeoff between the benefits and costs incurred in order to cover peak computing demand is challenging. Therefore, overcapacity or under-capacity is a common phenomenon in the investment of cloud capacity. Overcapacity puts companies at a cost disadvantage due to a low utilization of cloud resources. On the other hand, under-capacity puts them at a strategic disadvantage due to customer/user dissatisfaction, high penalty costs, and potential sales loss.”

Araujo et al. believe “The efficient and accurate assessment of cloud-based infrastructure is essential for guaranteeing both business continuity and uninterrupted services for computing jobs.”[14] However, López-Pires and Barán argue this is easier said than done because the efficient resource management of cloud infrastructures is highly challenging.[15] For Lee, most capacity planning and management studies focus on micro-level scheduling such as dynamic resource allocation and prioritization of computing jobs.[13] Widely used resource management methods like AWS’s Auto Scaling and Azure’s Autoscaling Application Block are reactive, and these monitoring tools are provided by the cloud companies themselves in a fox-guarding-the-henhouse kind of way, i.e., their profits are based on usage, so would they really be the best companies to provide applications that recommend usage limitations?[13]

Balaji, Kumar, and Rao contend, “While these reactive approaches are an effective way to improve system availability, optimize costs, and reduce latency, it exhibits a mismatch between resource demands and service provisions, which could lead to under or over provisioning.”[16] Several authors, including Wang, Hung, and Yang[17], Han, Chan, and Leckie[18], and Deng, Lu, Fang, and Wu[19], recommend predictive resource scaling approaches that can overcome the limitations of the reactive approach.
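
A minimal form of predictive scaling, in contrast to the reactive autoscalers above, forecasts the next interval’s demand from recent history and provisions ahead of it. The sketch below works under simple assumptions (a linear trend over a sliding window, a fixed per-instance capacity, an arbitrary headroom factor) and requires Python 3.10+ for statistics.linear_regression:

```python
# Minimal predictive-scaling sketch: forecast the next interval's demand
# from a linear trend over recent observations, then size capacity ahead
# of it. Window length, headroom factor, and per-instance capacity are
# illustrative assumptions.
import math
import statistics

def forecast_next(demand_history: list[float], window: int = 6) -> float:
    recent = demand_history[-window:]
    xs = list(range(len(recent)))
    slope = statistics.linear_regression(xs, recent).slope
    return max(0.0, recent[-1] + slope)  # one step ahead along the trend

def instances_needed(forecast: float, per_instance: float, headroom: float = 1.2) -> int:
    # Provision ahead of the forecast with a safety margin.
    return math.ceil(forecast * headroom / per_instance)

demand = [900, 950, 1100, 1300, 1500, 1750]  # requests/min, hypothetical
nxt = forecast_next(demand)
print(f"forecast {nxt:.0f} req/min -> {instances_needed(nxt, per_instance=500)} instances")
```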

In their article Cost benefits of flexible hybrid cloud storage: mitigating volume variation with shorter acquisition cycle[20], Laatikainen, Mazhelis, and Tyrvainen believe the hybrid cloud can reduce the financial burden of overcapacity investment and technological risks related to a full ownership of computing resources as well as allow companies to operate at a cost-optimal scale and scope under demand uncertainty.

With a hybrid cloud, companies can scale their computing requirements beyond the private cloud and into the public cloud, a capability also known as cloud bursting.[21] An application runs on its own private resources for most of its computing needs and then bursts into a public cloud when its private resources cannot handle surges in computing demand.[21] For example, a popular and cost-effective way to deal with the temporary computational demand of big data analytics is hybrid cloud bursting, which leases temporary off-premise cloud resources to boost overall capacity during peak utilization.[22]
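
The bursting decision itself reduces to a simple rule: fill private capacity first, then overflow the remainder to the public cloud. A schematic sketch in which the capacity figure and cost rate are hypothetical:

```python
# Schematic cloud-bursting rule: fill private capacity first, then
# overflow ("burst") the remainder to the public cloud. The capacity
# figure and cost rate are hypothetical.
PRIVATE_CAPACITY = 10_000  # jobs/hour the private cloud can absorb
PUBLIC_RATE = 0.05         # $ per job on the public cloud (illustrative)

def place_workload(demand: int) -> dict:
    private = min(demand, PRIVATE_CAPACITY)
    burst = max(0, demand - PRIVATE_CAPACITY)
    return {"private": private, "public": burst, "burst_cost": burst * PUBLIC_RATE}

print(place_workload(8_000))   # fits on-premises, no burst
print(place_workload(14_000))  # 4,000 jobs burst to the public cloud
```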

While the potential benefits of the hybrid cloud arise in the presence of variable demand for many real-world computing workloads, additional costs related to hybrid cloud management, data transfer, and development complexity must be considered.[23] Everything from bandwidth, latency, and the location of data to communication performance needs to be weighed when integrating a public cloud with a private cloud.[24]

Since each cloud provider has its own proprietary system, there are no standardized solutions, and cloud users must integrate diverse cloud services obtained from multiple cloud providers and then perform cloud bursting across the hybrid cloud environment. While various standardization efforts have been made for diverse cloud computing services, cloud providers often develop their own proprietary services as a way to lock in clients, differentiate their services, and achieve a market monopoly in the early stages of innovation.[25]

According to Forrester[26], in 2018 cloud computing became a must-have technology for every enterprise. I. Lee claims, “Nearly 60% of North American enterprises are using some type of public cloud platform. Furthermore, private clouds are also growing fast, as companies not only move workloads to the public cloud but also develop on-premises private cloud in their own data centers.”[13] Lee argues that the corporate adoption of hybrid cloud computing is an irreversible trend because the demand for big data, smartphones, and Internet of Things (IoT) technologies will not recede any time soon.[13]

Capacity Management: A Solution

Capacity planning attempts to reduce constraints within the IT estate. The Theory of Constraints “seeks to provide a precise and sustained focus on improving the current constraint until it no longer limits throughput, at which point the focus moves to the next constraint.”[27] It lays out a five-step process known as the Five Focusing Steps for identifying and eliminating constraints (see Figure 2).

Figure 2: The Theory of Constraints uses a process known as the Five Focusing Steps to identify and eliminate constraints (i.e., bottlenecks).[27]

The key to capacity management is balancing the right number of users with the right performance at peak usage to ensure a great end-user experience. “Demand drives stress,” claims VMware, the virtualization and cloud software company.[28] “Because the demand for capacity fluctuates in each environment, the top contenders for priority often include high efficiency versus low risk of poor performance.”[28] “The stress concept involves how high and how long the demand persists relative to the capacity available,” VMware explains, and it “uses this value to measure the potential for performance problems. The higher the stress score, the worse the potential is for degraded performance on your objects.”[28]
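
One way to approximate that stress concept, how high and how long demand runs relative to available capacity, is the average overage across an observation window. A rough sketch of the idea, not VMware’s actual formula:

```python
# Rough approximation of a stress score: the average amount by which
# demand exceeds available capacity across an observation window. An
# illustration of the concept only, not VMware's actual formula.
def stress_score(demand_samples: list[float], capacity: float) -> float:
    overage = [max(0.0, d - capacity) / capacity for d in demand_samples]
    return sum(overage) / len(overage)

cpu_demand = [60, 70, 95, 110, 120, 90, 65]  # demand as % of capacity, hypothetical
print(f"stress = {stress_score(cpu_demand, capacity=100):.3f}")  # higher is worse
```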

Efficiency and optimization are the goals for capacity planning. In an ITRS use case that looked at application demand modeling on a mobile banking platform, the finance client had the following three objectives:

  1. Predict maximum call volume on the current architecture. Identify degradation in performance as volumes increase.
  2. Recommend changes to improve capacity limits and reduce unused infrastructure.
  3. Allow modeling of increase in volumes and predict impact.

For this use case, which could easily be extrapolated to a marketing department, the infrastructure data was provided by vCenter to develop the baseline, covering a total of 1,420 virtual machines and 30 hosts. AppDynamics data was used to identify the servers associated with the mobile application and provided “in-app” data. The metrics were as follows:

  • Business volume metrics – ‘Calls Per Minute’.
  • Performance metrics – transaction response times.
  • In-app metrics – time per minute spent on garbage collection.
  • Tier/Role of each server.

The customer wanted a view of performance as well as resource utilization, which included the following standards:

  • Over 90% of transactions going through the system fell into the same four groups:
    • Login
    • Check Security Question Answer
    • Get Balance
    • Check Transaction History
  • It was established that the 95th percentile of transaction responses would be less than three seconds. Anything over that would be considered a drop in performance and noticed by end-users.
  • For the JBoss servers, specific attention was paid to the time spent doing garbage collection, i.e., the attempt to reclaim memory occupied by objects that are no longer in use by the program. The standard was that garbage collection should take less than 10% of each minute, so the threshold for this metric was set at 6 seconds (10% of a minute). A sketch of both checks follows this list.
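
Both standards are simple to verify once the telemetry exists. A minimal sketch of the p95 response-time check and the per-server GC budget check, run against hypothetical samples:

```python
# Checking the two standards against sample telemetry: 95th-percentile
# transaction response under 3 seconds, and garbage collection under
# 6 seconds per minute (10% of a minute). All samples are hypothetical.
import statistics

response_times = [0.4, 0.7, 1.1, 2.8, 0.9, 3.4, 0.6, 1.3, 0.8, 2.1]  # seconds
gc_seconds_per_minute = [2.0, 3.5, 6.4, 1.8]  # one value per JBoss server

p95 = statistics.quantiles(response_times, n=100)[94]  # 95th percentile
print(f"p95 response = {p95:.2f}s, within standard: {p95 < 3.0}")

for i, gc in enumerate(gc_seconds_per_minute, start=1):
    print(f"server {i}: GC {gc:.1f}s/min, within 10% budget: {gc < 6.0}")
```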

In general, ITRS recommends expressing capacity in business terms by understanding volume constraints and trends, and by doing a deep dive into relationships (a minimal sketch follows the list below):

  • Detect relationships between volume drivers and resource utilization.
  • Identify the strongest relationships that drive application constraints.
  • Track ongoing behavior and changes.
  • Create statistical summaries at multiple levels of the organization and build up an accurate understanding of application behavior and metric patterns.
  • Normalize data from the multiple tools and technologies a company has throughout its organization.
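
Detecting and ranking those relationships can be as simple as correlating each business-volume driver against each resource metric and keeping the strongest pairs. A minimal sketch with made-up series, requiring Python 3.10+ for statistics.correlation:

```python
# Minimal relationship mining: correlate each business-volume driver
# against each resource metric and rank the pairs by strength. The
# series are made up.
import statistics

drivers = {
    "calls_per_min":  [1000, 1500, 2200, 3000, 4100],
    "logins_per_min": [200, 210, 190, 220, 205],
}
resources = {
    "gc_sec_per_min": [1.2, 1.9, 2.9, 4.0, 5.6],
    "cpu_pct":        [35, 36, 34, 37, 36],
}

pairs = []
for dname, d in drivers.items():
    for rname, r in resources.items():
        r2 = statistics.correlation(d, r) ** 2  # r-squared
        pairs.append((r2, dname, rname))

# Strongest relationships first: these are the candidate constraints.
for r2, dname, rname in sorted(pairs, reverse=True):
    print(f"{dname} -> {rname}: r^2 = {r2:.2f}")
```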

Insights from these activities should provide information about resource utilization, implementation metrics, business volumes, and critical transaction response times. The goal is to have a single pane of glass view of the company’s entire systems and all its processes.

Figure 3: Capacity Planner architecture showing how business value is derived

Source: ITRS

ITRS’s Capacity Planner solution (see Figure 3) attempts to understand the correlations between business and performance metrics. By understanding a system’s relationships, ITRS’s Capacity Planner determines the primary constraints in the application, including ones for marketing.

The tier constraint chart (see Figure 4) shows the maximum volume that each component of the application can handle before running into capacity limit issues. VM 1001 is the JBoss server with the lowest capacity for volumes, causing garbage collection issues when the calls-per-minute volume reaches 4,314.

Figure 4: Tier constraints

Source: ITRS

Using these mined relationships, a ‘Demand Model Template’ is created so that the relationships can be used in a forward-looking model. This allows the user to model an increase of up to 6,000 extra calls per minute (see Figure 5). A strong correlation between calls per minute and garbage collection rates (an r-squared of 0.97) was discovered, and at just over an extra 4,000 calls per minute it is clear that the JBoss servers will breach their thresholds.
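
The forward model is essentially regression plus extrapolation: fit garbage collection time against calls per minute, then solve for the volume at which the 6-second GC threshold is breached. A simplified sketch with illustrative numbers rather than the client’s real telemetry:

```python
# Simplified demand model: fit garbage-collection time against calls per
# minute, then extrapolate to the volume at which the 6 s/min GC
# threshold would be breached. The numbers are illustrative, not the
# client's real telemetry. Requires Python 3.10+.
import statistics

calls_per_min = [1000, 1800, 2500, 3200, 4000]
gc_sec_per_min = [1.4, 2.5, 3.5, 4.5, 5.6]

fit = statistics.linear_regression(calls_per_min, gc_sec_per_min)
breach_volume = (6.0 - fit.intercept) / fit.slope
print(f"r^2 = {statistics.correlation(calls_per_min, gc_sec_per_min) ** 2:.2f}")
print(f"GC threshold breached near {breach_volume:,.0f} calls/min")
```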

Figure 5: Scenario modeling

Source: ITRS

Figure 6 shows that, by the last operation, having added an extra 6,000 calls per minute, the issue becomes application-wide. In this case, it was predicted that 5% of login response times would be above 3 seconds.

Figure 6: Scenario Modeling

Source: ITRS

The recommendation is to drag and drop four new JBoss VMs and distribute the demand evenly. Green icons on the timeline indicate that the operations would be successful (see Figure 7).

The results were as follows:

  • Using the application demand modeling process, the bank was able to clear constraints. Troubling bottlenecks were also identified.
  • No CPU or memory capacity issues resulted from the projected increased load.
  • Performance problems were detected, which led to an increase in response times.
  • The addition of four more VMs to the Jboss layer was expected to resolve the issue.
  • No new infrastructure was required.

For companies looking to implement a capacity management solution, ITRS recommends they do the following:

  1. Identify where the instance needs to run (location) and optimize for cost/performance.
  2. Identify the best way to buy the instance, which depends on how long it is going to run for; a break-even sketch follows this list. This is an aggregated need for that instance size, not the need for one specific instance for one application.
  3. Identify how long an instance should run for, and if it is idle, how long before it should be shut down.
  4. Continually analyze the billing engines of the cloud providers to identify optimal usage and policies.
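
The “best way to buy” in step 2 usually comes down to expected run-hours: on-demand pricing wins for short or intermittent use, while a reservation wins past a break-even utilization. A toy comparison with hypothetical rates, not any provider’s actual pricing:

```python
# Toy right-buying comparison: on-demand vs. a one-year reservation,
# decided by expected utilization. Rates are hypothetical, not any
# provider's actual pricing.
ON_DEMAND_PER_HOUR = 0.10   # $
RESERVED_PER_YEAR = 500.00  # $ flat for a one-year commitment
HOURS_PER_YEAR = 8760

def yearly_cost(expected_utilization: float) -> dict:
    hours = HOURS_PER_YEAR * expected_utilization
    return {"on_demand": hours * ON_DEMAND_PER_HOUR, "reserved": RESERVED_PER_YEAR}

for util in (0.25, 0.50, 0.75):
    costs = yearly_cost(util)
    best = min(costs, key=costs.get)
    print(f"{util:.0%} utilization -> buy {best} ({costs})")
```

With these rates the break-even sits near 57% utilization (5,000 run-hours a year), which is why idle or intermittent workloads are better left on demand.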

Ideally, the capacity planning tool should be an active tool, so that some types of recommendations/policies can be actioned automatically. Finally, the tool should be able to compare multiple cloud providers and then optimize across the hybrid-cloud IT estate.

Additionally, ITRS recommends companies optimize their usage and continuously review the following:

  1. Optimize the cloud at the application level by correlating business demand with cloud service utilization.
  2. Plan for growth and predict upcoming costs with advanced predictive analytics and forward-thinking what-if scenario modeling.
  3. Improve business processes with service management integration.
  4. Manage across Hybrid-IT, on-prem, and multi-cloud in a single tool with consistent reporting regardless of the environment.

Figure 7 shows how right-sizing, i.e., “the process of matching instance types and sizes to your workload performance and capacity requirements at the lowest possible cost”[29], works. As AWS contends, right-sizing is “also the process of looking at deployed instances and identifying opportunities to eliminate or downsize without compromising capacity or other requirements, which results in lower costs.”[29] When moving an application currently running in a data center to the cloud, you can capture all the needed server information before the actual move.

Figure 7: Scenario Modeling

Source: ITRS
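
Mechanically, right-sizing amounts to profiling observed demand at a high percentile and picking the cheapest instance type that still covers it. A schematic sketch; the catalog, prices, and usage series are all made up:

```python
# Schematic right-sizing: profile observed demand at a high percentile,
# then pick the cheapest instance type that still covers it. The
# catalog, prices, and usage series are all made up.
import statistics

catalog = [  # (name, vCPUs, GiB RAM, $/hour), sorted by price
    ("small",  2,  4, 0.05),
    ("medium", 4,  8, 0.10),
    ("large",  8, 16, 0.20),
]

def right_size(cpu_samples: list[float], mem_samples: list[float]):
    # Size to the 95th percentile rather than the absolute peak, leaving
    # rare bursts to burstable credits or autoscaling.
    need_cpu = statistics.quantiles(cpu_samples, n=100)[94]
    need_mem = statistics.quantiles(mem_samples, n=100)[94]
    for name, vcpu, ram, price in catalog:
        if vcpu >= need_cpu and ram >= need_mem:
            return name, price
    return "no fit", None

cpu = [1.1, 1.5, 2.2, 3.0, 2.8, 1.9, 2.5, 3.3, 2.0, 1.2]  # vCPUs used
mem = [3.0, 4.5, 5.0, 6.5, 6.0, 5.5, 4.0, 6.8, 5.2, 3.5]  # GiB used
print(right_size(cpu, mem))  # -> ('medium', 0.1)
```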

Figure 8 illustrates a data center running the ITRS Capacity Planner tool. Blue areas are under-utilized while orange areas are more heavily utilized. Capacity Planner shows the recommendations to improve the use of the data center servers.


Figure 8: On-Premise Recommendations

Source: ITRS

However, if the application is to be ported to the cloud, as in Figure 9, it makes no sense to buy the extra capacity in the cloud, as it is easy to resize the system if and when it is needed. So the tool reveals the cost of moving ‘like for like’ in case it’s important to know the baseline. The cost of moving the application right-sized into the cloud is also shown. In this case, a savings of 28% would be attained on a 1-year plan.


Figure 9: Migration from On-Premises workload to the Cloud

Source: ITRS

Most cloud cost optimization tools work at the total cloud spend level. They only optimize the entire cloud estate. For some larger companies and their application teams, this is too broad an undertaking to commit to. ITRS Capacity Planner, however, works at the single instance level, so separate modeling and application optimization can be achieved.

If an on-premises workload is to be moved to the cloud, Capacity Planner prices up the ‘Like for Like’ migration and the rightsized estate, identifying cost savings as per Figure 10.

Figure 10: Adding Application-level visibility

Source: ITRS

Applications can be analyzed and optimized individually or in the aggregate. Certain applications may have different resource usage requirements and therefore are not suitable for aggregation. In this example, high memory and a fast disk are necessities.

Figure 11: Configuring Recommendation Rules

Source: ITRS

The user can define the ‘Recommendation Rules’ (see Figure 11), which drive the optimization and migration of specific application needs. In this case, the memory and fast disks needed will be provided. So, when planning the migration of this application, the cloud provisioning team will be informed that ‘memory-optimized’ instances are required, which provide ‘burstable’ dynamic memory and solid-state disks (SSDs) for the fastest disk access possible.


Figure 12: Cloud Instance Recommendations

Source: ITRS

Right-size options are presented visually (see Figure 12), with configurations matched to the closest instance sizes based on statistical profiling of demand. In this example, a client had bought an Oracle Linux 4/5 server, but when its CPU and memory usage was analyzed, an instance that better matched the statistical profile of the demand was recommended.

The next stage is right-buying. Once the instances needed to run the application are known, Capacity Planner looks at how long and where the instances will run.

Figure 13: Timeburst view

Source: ITRS

Figure 13 shows an IT estate that had been left running constantly. The blue areas of the estate are underutilized or idle systems that have not been halted. Running instances when there are no workloads is one of the biggest causes of wasted cloud spend. Most of the time, many of these unused instances can – and should – be shut down.
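
Flagging such instances programmatically is straightforward once utilization history exists: anything below a low CPU threshold for a sustained window is a shutdown candidate. A minimal sketch over hypothetical samples; a real deployment would pull the data from the provider’s monitoring API and act through the provider’s own stop/start calls:

```python
# Minimal idle detection: flag instances whose CPU stayed under a low
# threshold for the entire recent window as shutdown candidates. The
# utilization samples are hypothetical; in practice they would come from
# the cloud provider's monitoring API.
IDLE_CPU_PCT = 5.0
WINDOW = 12  # most recent samples, e.g. twelve 5-minute intervals

fleet = {
    "web-01":   [42, 55, 61, 38, 47, 52, 44, 49, 58, 40, 45, 51],
    "batch-07": [1, 2, 1, 0, 1, 2, 1, 1, 0, 2, 1, 1],
    "etl-03":   [3, 2, 4, 2, 1, 3, 2, 2, 3, 1, 2, 2],
}

candidates = [
    name for name, cpu in fleet.items()
    if all(sample < IDLE_CPU_PCT for sample in cpu[-WINDOW:])
]
print("shutdown candidates:", candidates)  # -> ['batch-07', 'etl-03']
```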

Figure 14: Timeburst view with most workloads very short-lived with minimal idle time

Source: ITRS

Figure 14 shows the “after” snapshot: the same estate with Capacity Planner’s right-buying and policy management in place. The black areas reveal instances that have been automatically shut down and then spun back up as and when needed. This activity resulted in a 75% AWS saving. It will also make the entire system much more productive.

When working with clients, ITRS recommends the following to implement its capacity planning solution:

  • Right Size:
    • Highly granular data capture of all resource usage (CPU, mem, disk, network). Identify the sizes of instances needed for each application workload.
    • Determine optimum configuration of burstable or non-burstable instances.
    • Identify idle times and workload periodicity.
    • Understand the capacity of Hybrid-IT. Repatriate work from the cloud if on-premises capacity allows it.
  • Right Buy:
    • Identify where the instance needs to run (location) and optimize for cost/performance.
    • Identify the best way to buy the instance, which depends on how long it will run.
    • Identify how long an instance should run for, and if it is idle, how long before it should be shut down.
    • Continually analyze the billing engines of the cloud providers to identify optimal usage and policies.
  • Optimize:
    • Right-Size at the application level by correlating business demand with cloud service utilization.
    • Plan for growth and predict upcoming costs with advanced predictive analytics and forward-thinking ‘what-if’ scenario modeling.
    • Improve business processes with Service Management integration.
    • Manage across Hybrid-IT, on-prem, and multi-cloud in a single tool with consistent reporting regardless of the environment.

Conclusion

I. Lee argues, “Capacity planning is a challenging task when there is an unpredictable, fluctuating computing demand with many peaks and troughs.”[13] This is mitigated by the fact that not only are there many ways to track capacity data these days, but analytics can also produce remarkably accurate evaluation models that help marketers quickly go ROI-positive on their capacity planning and marketing initiatives.

Capacity planning works hand-in-hand with Kalb’s condition that every dollar spent on marketing be valued and quantified.[3] It attempts to ensure there is as little wastage as possible across the IT estate, including marketing’s IT activities. The question is, can marketing’s IT activities be quantified so that every sale, lead, and marketing offer can be traced back to the marketing effort that produced it? This is a lofty goal, but IT is getting there.

ITRS recommends companies optimize their usage and continuously review their cloud usage at the application level by correlating business demand with cloud service utilization. Amazon became one of the most valuable companies in the world on the back of the profits it made on its AWS service, not its marketplace, where margins are slim. This should be a warning to anyone planning to move their IT services into the cloud: although the move might make a lot of budgetary sense, there’s no reason to pay for servers that don’t need to be running.

Companies should plan for growth and predict upcoming costs with advanced predictive analytics and forward-thinking what-if scenario modeling, says ITRS. COVID came out of nowhere and threw a wrench into a lot of business forecasting models, but that was a unique situation, and it spurred a work-from-home trend that put great strains on IT departments the world over. It did, however, show the value of the cloud, and ITRS’s recommendation that companies manage their hybrid-IT, on-prem, and multi-cloud platforms through a single tool will provide consistent reporting regardless of the environment. This should help businesses contain costs.

Forty years ago, John Naisbitt warned us, “We are drowning in information but starved for knowledge.”[4] Although some things have certainly changed for the better, we are still overwhelmed by data today. However, we can capture, track, and utilize this data better than we could just a few short years ago, and there is an abundance of products on the market that allow the kind of tracking that becomes the baseline for change. With the IoT revolution about to bring massive new amounts of collected data online, businesses need to understand that the cloud can be a good place to turn for help, but if it is not tightly controlled, costs can quickly spiral out of control.

Rather than waiting for a dream solution to materialize, marketers need to find a system that enables them to make better decisions, and they need to support those decisions with verifiable data. Wanamaker’s famous lament that he didn’t know which half of his marketing spend was worth it and which wasn’t no longer rings true. Today, the dominant advertising, social media, CRM, and marketing software companies are focused on marketing attribution, trying to help companies quantify every dollar they spend on advertising. That’s an important part of the equation, but it’s just as important to understand the cost of attaining that attribution. “Doing business without advertising is like winking at a girl in the dark. You know what you are doing, but nobody else does,” says Steuart Henderson Britt, a consumer behavior specialist. The quip also, amusingly, exemplifies what goes on behind the scenes with capacity planning, real-time monitoring, AIOps, cloud bursting, and predictive resource scaling. It’s a lot, and it’s incredibly valuable, something marketing and IT departments should team up to address. Few people realize that a strong operations backbone can be the impetus that helps marketing create the wink that just might tease the client enough to close the sale.

[1] DuBois, John. Chron.com. What Is a Constraint in Marketing? https://smallbusiness.chron.com/constraint-marketing-65978.html (Accessed 14 December 2020).

[2] Goldratt, Eliyahu M., 1947-2011. The Goal: a Process of Ongoing Improvement. Great Barrington, MA: North River Press, 2004.

[3] Kalb, Ira. 8 Steps To Creating An Effective Marketing Information System. Business Insider / Marshall School of Business. 22 November 2013. (Accessed 13 December 2020).

[4] Naisbitt, John. Megatrends: Ten New Directions Transforming Our Lives. (1982). Warner Books, Inc.; 1st edition, October 27, 1982.

[5] https://info.flexera.com/SLO-CM-REPORT-State-of-the-Cloud-2020 (Accessed 6 November 2020).

[6] Kotler, P., (1988) Marketing Management: Analysis Planning and Control, Prentice-Hall p. 102.

[7] Sebestyén, Zoltán and Juhász, Viktor. (2003). The Impact Of The Cost Of Unused Capacity On Production Planning Of Flexible Manufacturing Systems. Department of Industrial Management and Business Economics Budapest University of Technology and Economics H–1521 Budapest, Hungary. October 20, 2003. https://www.researchgate.net/publication/254407614_The_impact_of_the_cost_of_unused_capacity_on_production_planning_of_flexible_manufacturing_systems (Accessed 31 October 2020).

[8] Brynjolfsson, Erik. (2003). ROI Valuation, The IT Productivity GAP.  https://www.academia.edu/2662751/ROI_Valuation_The_IT_Productivity_GAP (Accessed 5 November 2020).

[9] https://pwemag.co.uk/news/fullstory.php/aid/1764/10_rules_for_condition_monitoring.html (Accessed 5 November 2020).

[10] Lerner, Andrew. AIOps Platforms. Gartner.com. August 09, 2017. https://blogs.gartner.com/andrew-lerner/2017/08/09/aiops-platforms/ (Accessed 14 December 2020).

[11] Fitz, Trent. The State of AIOps: Understanding the Difference Between Tools. VMBlog. October 01, 2019. https://vmblog.com/archive/2019/10/01/the-state-of-aiops-understanding-the-difference-between-tools.aspx#.X9cCz9gzY2w (Accessed 14 December 2020).

[12] Soneja, Atul. CIO.com. How AIOps is already transforming IT. March 09, 2020. https://cio.economictimes.indiatimes.com/news/next-gen-technologies/how-aiops-is-already-transforming-it/74544705 (Accessed 14 December 2020).

[13] Lee I (2017) Determining an optimal mix of hybrid cloud computing for enterprises. UCC ’17 companion. In: Proceedings of the 10th international conference on utility and cloud computing, pp 53–58.

[14] Araujo J, Maciel P, Andrade E, Callou G, Alves V, Cunha P (2018) Decision making in cloud environments: an approach based on multiple-criteria decision analysis and stochastic models. J Cloud Comput Adv Syst Appl 7:7. https://doi.org/10.1186/s13677-018-0106-7

[15] López-Pires F, Barán B (2017) Cloud computing resource allocation taxonomies. Int J Cloud Computing 6(3):238–264

[16] Balaji M, Kumar A, Rao SVRK (2018) Predictive cloud resource management framework for enterprise workloads. J King Saud Univ Comput Inf Sci 30(3):404–415

[17] Wang CF, Hung WY, Yang CS (2014) A prediction based energy conserving resources allocation scheme for cloud computing. In: 2014 IEEE international conference on granular computing (GrC), Noboribetsu, Japan, pp 320–324.

[18] Han Y, Chan J, Leckie C (2013) Analysing virtual machine usage in cloud computing. In: 2013 IEEE ninth world congress on services, Santa Clara, CA, USA, pp 370–377.

[19] Deng D, Lu Z, Fang W, Wu J (2013) CloudStreamMedia: a cloud assistant global video on demand leasing scheme. In: 2013 IEEE international conference on services computing, Santa Clara, CA, USA, pp 486–493.

[20] Laatikainen G, Mazhelis O, Tyrvainen P (2016) Cost benefits of flexible hybrid cloud storage: mitigating volume variation with shorter acquisition cycle. J Syst Softw 122:180–201

[21] Guo T, Sharma U, Shenoy P, Wood T, Sahu S (2014) Cost-aware cloud bursting for enterprise applications. ACM Trans Internet Technol (TOIT) 13(3):10 22 pages

[22] Clemente-Castelló FJ, Mayo R, Fernández JC (2017) Cost model and analysis of iterative MapReduce applications for hybrid cloud bursting. In: CCGrid ‘17 Proceedings of the 17th IEEE/ACM international symposium on cluster, cloud and grid computing, Madrid, Spain, pp 858–864

[23] Weinman J (2016) Hybrid cloud economics. IEEE Cloud Comput 3(1):18–22

[24] Toosi AN, Sinnott RO, Buyya R (2018) Resource provisioning for data-intensive applications with deadline constraints on hybrid clouds using Aneka. Future Gener Comput Syst 79(Part 2):765–775

[25] Edmonds A, Metsch T, Papaspyrou A, Richardson A (2012) Toward an open cloud standard. IEEE Internet Comput 16(4):15–25.

[26] Forrester (2018) Predictions 2019: Cloud computing comes of age as the foundation for enterprise digital transformation. Available from: https://go.forrester.com/blogs/predictions-2019-cloud-computing/.

[27] https://blogs.3ds.com/delmia/uncovering-hidden-lessons-goal-part-1/ (Accessed 30 December 2020).

[28] https://docs.vmware.com/en/vRealize-Operations-Manager/6.6/com.vmware.vcom.core.doc/GUID-AEB32BB2-7828-4664-A81A-5E7E3CF38620.html (Accessed 5 November 2020)

[29] https://aws.amazon.com/aws-cost-management/aws-cost-optimization/right-sizing/ (Accessed 5 November 2020).
