
Technology News and Trends

Four Tips to help you prepare for Cyber Monday and the New Year Sales

Posted by Rob Quickenden

01-Dec-2014 09:29:28

Avoid Network Operations Center Finger Pointing

You need to get crucial monitoring and performance data to the right teams to avoid finger pointing when things slow down - let's face it, the last thing you want to do when traffic is high is to invoke traffic calming and queue people onto your website. Enterprise IT staff typically hold daily war-room meetings to review and discuss network and application performance.

Those who pass the finger-pointing test get to go home while others have to stay and figure out what went wrong with the network, the servers, and the applications.

Here are four tips to help avoid that finger pointing at critical times.

 


 

1) Keep IT Simple. You don't have to be in media, eCommerce or retail to know that solving IT problems quickly can make a huge difference to the overall performance of your organisation's IT and its client/customer-facing business applications. The simpler your network and application monitoring tools and alerting systems, the easier it is to determine where the problem is, why it's occurring and what needs to be fixed!

Sounds simple, right? But when you have complicated, non-shared tools for network performance management (NPM) and application performance management (APM), IT and network teams are typically hampered in any attempt to solve problems together as a cohesive team. In the past that mattered less, because fixing the network usually solved most end-user experience challenges. That's not the case now - just last week I spoke to a client who'd been trying to fix some intermittent user issues for three weeks!

Within today's hybrid enterprise, with so many components affecting application performance, it's not always the network that is at fault... it could be an application, part of an application, a database server or your Internet pipe! Knowing what is at fault, and fixing it quickly, is key to keeping the business running smoothly!
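By way of illustration, here is a minimal first-pass check (nothing to do with any specific Riverbed tool; the hostname and thresholds are placeholders) that times the raw TCP connection separately from the full HTTP response - often enough to tell you whether to start with the network team or the application team:

```python
# A minimal sketch (illustrative hostname and thresholds): time the TCP connect
# separately from the full HTTP response to see whether the network path or the
# application tier is the slow part.
import socket
import time
import urllib.request

HOST, PORT = "www.example.com", 443
URL = "https://www.example.com/"

start = time.perf_counter()
with socket.create_connection((HOST, PORT), timeout=5):
    tcp_connect_ms = (time.perf_counter() - start) * 1000   # pure network round trip

start = time.perf_counter()
with urllib.request.urlopen(URL, timeout=10) as response:
    response.read()
http_total_ms = (time.perf_counter() - start) * 1000         # network + application time

print(f"TCP connect: {tcp_connect_ms:.0f} ms | full HTTP response: {http_total_ms:.0f} ms")
if tcp_connect_ms > 200:
    print("The network path itself looks slow - start with the WAN/Internet pipe.")
elif http_total_ms - tcp_connect_ms > 1000:
    print("The connection is fine but the response is slow - look at the app or database tier.")
else:
    print("Both layers look healthy.")
```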

 

2) Manage from the Top Down, not the Bottom Up - the critical ingredient to the success of any business is to have the customer or end-user experience of business-critical applications at the top of your priority list! With today's hybrid enterprise, and with applications and services spread across so many systems with so many inter-dependencies, you need a performance engineering team that can manage and work across a number of different teams. Without this joined-up approach, everyone will be working in their own silo, with no accountability for the overall success of your application delivery.

 

3) Ensure you have an integrated Cross-Architecture performance dashboard - almost all troubleshooting begins in the NOC. As a network operator, you get a quick view of whether you have an application problem or a site problem with a dashboard like this:

[Dashboard screenshot: APM and NPM]

This dashboard in the NOC gets you started. Where you go from here is driven by the data, which must be easily shared across teams to be effective when troubleshooting and fixing an issue. The dashboard above is from Riverbed's SteelCentral.

 

4) Remember - Better Together! - With a comprehensive set of passive and active NPM and APM monitoring tools, such as those from Riverbed's complete Application Performance Platform™, you have a much better chance of solving application performance issues across teams, and doing so much faster.

Another advantage of Riverbed's portfolio is a continued commitment to "Better Together" solutions. For example, AppResponse 9.5 now integrates with SteelHead 9.0 for visibility into, and troubleshooting of, WAN-optimized web applications on-premises or delivered as SaaS.

In another together-is-better integration, SteelCentral NetProfiler works with SteelHead to provide deep packet inspection (DPI) data on the specific ports, protocols, and applications running in your branch offices.


Cisilion are proud to be Riverbed Premier Plus Partners and Riverbed Authorized Support Partners. Contact us for more information, to book a demo or to explore a Proof of Concept.


Topics: Technology News, Application Centric Infrastructure, Solving Business Challenges, Riverbed, e-commerce

SpotLight On: Load Balancing and Application Delivery Controllers

Posted by Rob Quickenden

17-Sep-2014 15:22:00

Yesterday, we ran an event at the Duck and Waffle Restaurant (40th Floor Heron Tower) on Optimising the Digital Experience with Riverbed's Performance Management platform and in particular SteelApp.

We made a few undue "assumptions" that all of our guests knew what Load Balancing and "Application Delivery Controllers" are and how they work, so we thought we would give a brief overview of each.

Acronyms and Terminology

GLBs, ADCs, Traffic Managers, Reverse Proxies, Application Proxies. These words/phrases were used a lot at the event in different contexts. We thought it was worth clarifying exactly what these “ADCs” and “Traffic Managers” actually do, and how they differ from plain and simple “Load Balancers” or DNS round-robin processes.
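For context, plain DNS round-robin is the simplest of these mechanisms: a name resolves to several addresses and each client just picks one, with no health checks or session awareness. A tiny sketch of what that looks like (the hostname is a placeholder):

```python
# Minimal sketch of DNS round-robin: one hostname, several candidate addresses,
# and no intelligence beyond which address the client happens to pick.
import socket

records = socket.getaddrinfo("www.example.com", 80, proto=socket.IPPROTO_TCP)
addresses = sorted({record[4][0] for record in records})
print("DNS round-robin candidates:", addresses)
# A Traffic Manager / ADC goes further: it sits in front of the server pool,
# health-checks the members, balances the load and applies per-request policy.
```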

So – what is a “Traffic Manager?”

A Traffic Manager performs a similar function for a website (or web service) to the one a call-flow management system performs in a call centre. Think about a customer service representative in a small business, for example. Their direct-dial number is published in the phone book or online, and they handle inbound and outbound customer queries, ranging from account queries and technical support questions through to escalations and complaints.

This is where the trouble starts.

As this "small" company becomes more successful, the volume of calls increases and customer service levels begin to drop as calls are left unanswered or dealt with incorrectly. For example, the phone line may be engaged, or the customer service rep may be away from their desk, so calls get missed. In this scenario, the rep is also busy dealing with all kinds of “unwanted calls”: sales calls, wrong numbers and spam calls from other companies, as well as personal calls.

What the company needs is a way to control how phone calls are routed to employees, to ensure that their customers are serviced by the right people in the fastest time possible.

So what’s the solution?

In the case of the example above, many businesses implement call centre technology. A call centre is a great way to solve the problem: a company can sit a number of “operators” behind a call management system to balance the phone calls across members of staff, to route particular calls to particular departments and, most importantly, to gain more control - stopping calls from certain locations, screening out nuisance calls and, in some cases, even responding directly to customer enquiries. Above all, it’s about improving the customer experience…

How does this work for my applications then?

In a very similar way actually.

In just the same way as the example above, a business may have an online application (let’s call it an e-commerce website) that starts out hosted on a single web server with a public IP address. As the business grows, it quickly progresses to building a farm of web servers, which may be hosted on-premises, in the cloud, or in a hybrid combination of the two.

To ensure that these applications are delivered in a timely, secure and efficient way, businesses choose to deploy “Traffic Management” in front of these web applications. These traffic management systems (often referred to as “Load Balancers” or “Proxies”) are application delivery controllers. Their job is to manage the delivery of the critical applications and services that the business publishes. The effectiveness of these ADCs can be measured by one simple metric – the degree of control they give you in delivering the application to the user in the fastest and most personal way.
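To make that concrete, here is a deliberately stripped-down sketch of the core mechanism: a reverse proxy that round-robins GET requests across a pool of back-end web servers. The addresses and port are placeholders, and a real ADC (SteelApp included) adds health checks, SSL offload, session persistence, security policy and much more on top:

```python
# Minimal sketch of a round-robin reverse proxy: clients hit one public address,
# and requests are spread across a hidden pool of back-end web servers.
# Back-end addresses and the listening port are illustrative placeholders.
import itertools
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.request import urlopen

BACKENDS = itertools.cycle([
    "http://10.0.0.11:8080",  # web server 1
    "http://10.0.0.12:8080",  # web server 2
    "http://10.0.0.13:8080",  # web server 3
])

class RoundRobinProxy(BaseHTTPRequestHandler):
    def do_GET(self):
        backend = next(BACKENDS)                      # pick the next server in the pool
        with urlopen(backend + self.path, timeout=10) as upstream:
            body = upstream.read()
        self.send_response(200)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)                        # relay the back-end's response

if __name__ == "__main__":
    HTTPServer(("0.0.0.0", 8000), RoundRobinProxy).serve_forever()
```

Clients only ever see the proxy’s single public address; the pool behind it can grow, shrink or move without the outside world noticing.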

 

Ok, I got it! So where and how do I deploy them?

Virtualization and Cloud have had a significant impact on the architecture of applications and servers running in a business – this deployment model applies equally to ADCs.

  • The old architecture was monolithic - with all elements of an application deployed within a physical data center and physical ADCs in close proximity to manage the traffic. It was a static environment, with long lead times required to change or upgrade the overall application deployment.
  • The new architecture is more flexible, following the advantages of virtualisation. With elements of an application spread across a range of IT environments (and often in different locations), we can now place an ADC around each part of the application. This means we can ensure application availability whatever the demand, without buying expensive, static, “future-proofed” hardware, so we not only control costs better but also support an environment that is more dynamic and distributed in nature.

 

And an example of how Riverbed SteelApp does this?

Yesterday, Steve Mavin (from Riverbed) spoke about ‘See Tickets’. See Tickets is one of the largest online ticket resellers in Europe, with over 34 million online unit sales per year.  The IT team at See Tickets was becoming increasingly aware that its website struggled to cope with huge and sudden spikes in web traffic – meaning they were missing sales! With 85 percent of ticket sales made online, and over 1 million page views per day, See Tickets is heavily reliant on the performance of its website infrastructure.

When See Tickets launches a new event, thousands of people will hit their website at the same time, and there is a real danger that the web site will collapse. SteelApp steps in to act as a shock absorber, and manages the incoming requests to protect the application, and speed up the response time.

Riverbed SteelApp was chosen for its flexibility and reporting metrics to monitor and review website traffic patterns, identify trends, and make informed business decisions. They use SteelApp to apply a series of application business policies to control the type, flow, and priority of traffic to their website. This enables See Tickets to manage traffic to the back-end servers to cope with the huge peaks of visitors coming to the site. 
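That “shock absorber” behaviour boils down to admission control: only so many requests are allowed through to the back end at once, and the overflow queues (or is politely turned away) rather than crushing the servers. A minimal sketch of the idea - the limit, timeout and messages below are illustrative placeholders, not SteelApp policy settings:

```python
# Minimal sketch of the "shock absorber" idea: cap concurrent back-end requests
# and queue the overflow. The limit and timeout are illustrative placeholders.
import threading
import time

MAX_CONCURRENT_BACKEND_REQUESTS = 200
backend_slots = threading.BoundedSemaphore(MAX_CONCURRENT_BACKEND_REQUESTS)

def handle_request(request_id: int) -> str:
    # Every incoming request must claim a slot before touching the back end;
    # during a spike, the excess waits here instead of overwhelming the servers.
    if not backend_slots.acquire(timeout=30):
        return f"request {request_id}: site busy, please retry shortly"
    try:
        time.sleep(0.05)  # stand-in for the real back-end call
        return f"request {request_id}: served"
    finally:
        backend_slots.release()

# Simulate a small burst of simultaneous visitors hitting the site.
threads = [threading.Thread(target=lambda i=i: print(handle_request(i))) for i in range(10)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```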

In addition, SteelApp Web App Firewall is a key part of the solution, providing an additional layer of security, giving See Tickets optimum protection for their online presence. See Tickets’ customers have the reassurance that their personal and financial information is protected, as they comply with the Payment Card Industry Data Security Standard (PCI DSS).

Online ticket sales and similar high-volume transactions are a great example of the way in which SteelApp can make a real difference to a web application: put simply, shorter response time means they make more money, and downtime means losing money.

Anything else?

Riverbed SteelApp is the #1 virtual application delivery controller (ADC) for scalable, secure, and elastic delivery of your enterprise, cloud, and e-commerce applications.

We'd love to hear from you to talk through your application and business performance needs. Please get in touch @Cisilion or contact me directly @rquickenden 

 

 


Topics: Technology News, Application Centric Infrastructure, Solving Business Challenges, Riverbed

Gain an advantage with Location Independent Computing

Posted by Rob Quickenden

27-Jun-2014 17:17:55

In an ideal world, all of your applications and services would be delivered centrally from Public or Private Clouds and accessible anywhere, from any location, securely of course.

Whilst most businesses are virtualising and consolidating as many distributed applications and as much data as possible into their Data Centre or Cloud platforms, this has not been possible for every business or branch application, due to bandwidth constraints, political reasons, or because some applications just don’t work well across high-latency wide area connections.

As such, truly consolidating server, storage and application infrastructure where remote offices are concerned needs to be addressed carefully. Why? Because data centres house the compute, data and applications (and have no users), whilst branch/regional offices are usually full of people, all trying to access those applications and data over typically slow, or at best heavily congested, WAN connections.

This basic distinction has important consequences for the technology required in branch/regional offices.

Why? Because it’s hard to modernise and centralise branch office IT without really understanding, and taking into account, the performance users actually experience.

What’s the Problem then?

The issues of high-latency connections, application sprawl, I/O-intensive applications and “concerns” over WAN failure have led to businesses putting cheaper variants of Private Cloud infrastructure (aka local bits of cheap kit) into branches, or simply accepting poor performance. Neither of these is ideal – local branch infrastructure means management overhead, local back-up issues, compliance issues and security concerns.

According to IDC, 50% of global companies still have more than 50% of their data in their branches.

How do we help our customers?

At Cisilion, we help our customers to understand the cause of these problems and, more importantly, how they can overcome them: by providing fast, fluid and secure branch office solutions with no performance compromise, whilst still keeping IT centralised with little or no need for local IT staff or expensive equipment.

With more than 13 years’ experience in compute, storage and networking topologies, we provide design, professional services, support and management that enable:

  • WAN optimisation & acceleration: With customers moving as many applications as possible to the Public and Private Cloud, businesses need to ensure their users can still access those applications and data as if they were local, without the lag and delay often experienced over Wide Area Networks.

  • Elimination of all Branch Office Storage: Accepting that some services just don’t perform across even the best-optimised WAN means keeping servers and storage locally at the branch, yet IT doesn’t want to be running backups at every branch (or across the already creaking WAN). The business needs to offer the same RPO and RTO at branches as it does for centralised services, and we can help customers have data presented locally while it is managed, secured and backed up centrally. This means local, WAN-resilient access to applications and data that are stored centrally.
  • Remote administration and support: IT can save a lot of pain (and travel costs) by managing their entire branch estate (servers, storage and compute) as if it were local, without the need for complex and expensive management tools and replication technology.
  • Quality of Service and Application Delivery: Remote office users use a wide range of applications, from mission critical applications such as CRM and ERP, to latency-sensitive voice, and recreational Internet traffic. IT needs to ensure that they can intelligently identify and control this traffic and ensure predictable and guaranteed performance of critical and latency-sensitive apps.

  • Bandwidth Consolidation and Path Selection: With the days of a simple, single link into branch offices disappearing, branches often have a mix of MPLS and Internet broadband connections. Rather than just accepting that these provide back-up and resilience, businesses want to ensure the right applications traverse the right links - with less important traffic going over contended, lower-bandwidth links and mission-critical traffic going across expensive MPLS, for example (see the sketch after this list).
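A much-simplified sketch of what that application-aware path selection boils down to: map each traffic class to the link that suits it. The application names and link labels below are made up for illustration and are not a Riverbed configuration.

```python
# Minimal sketch of application-aware path selection: each traffic class is
# steered to the WAN link that suits it. Names below are illustrative only.
POLICY = {
    "voice": "mpls",              # latency-sensitive -> reliable, expensive link
    "erp": "mpls",                # mission-critical  -> reliable, expensive link
    "crm": "mpls",
    "web_browsing": "broadband",  # best-effort       -> cheap Internet link
    "backup": "broadband",
}

def choose_link(application: str) -> str:
    """Return the WAN link this application's traffic should traverse."""
    return POLICY.get(application, "broadband")  # unknown traffic takes the cheap link

for app in ("voice", "erp", "backup", "unknown_app"):
    print(f"{app:>12} -> {choose_link(app)}")
```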

Traditionally, businesses have simply thrown more money at expensive WAN links or just “stuck stuff locally”. Whilst you can now generally get a lot more MB for your £, Internet connections are still typically latency-heavy and never perform as well as the local LAN in terms of speed and reliability.

In our experience, IT needs to care about what causes the bottlenecks and performance problems, but the business, its users and its customers simply care about application performance and what the user experience is like. I spend a lot of my time working with business leaders, helping them to achieve Location-Independent Computing, enabling them to deliver business applications to any user in any location without the need to deploy and manage silos of IT and storage at branches and remote offices.

As always we are interested in your views on this - please get in touch with us and follow what we do @Cisilion or at sales@cisilion.com

www.cisilion.com

 


Topics: Application Centric Infrastructure