11 Dec

Horizontal and Vertical Scaling

Scalability is the capability of a system to handle an increasing amount of load, either within its existing configuration or by adding extra hardware. There are two main types of scaling: horizontal and vertical. Horizontal scaling means adding more servers to your application to spread the load; the simplest case may be moving your database onto a separate machine from your web server. Vertical scaling, by contrast, means adding more RAM, more processors, more bandwidth, or more storage to a single machine.

Horizontal scaling can only be applied to applications built in layers that can run on separate machines. It applies especially well to on-demand cloud server architectures, such as Amazon’s EC2 hosting platform, and it also facilitates redundancy: with each layer running on multiple servers, your application keeps running even if a single machine fails. Vertical scaling, meanwhile, can be a quick and easy way to bring your application’s level of service back up to standard. On the negative side, vertical scaling will only get you so far: upgrading a single server beyond a certain level becomes very expensive, often involves downtime, and ultimately hits a hard upper limit.

So which scaling strategy best suits your needs? It comes down to the application it must be implemented on. Some applications can only scale vertically because they can only run on a single server; with these, the strategy choice is made for you. A well-written application, however, should scale horizontally with ease, and an application designed to scale horizontally can also be scaled vertically, so you’re left with an open choice: weigh the cost of vertically adding an extra bit of RAM against horizontally adding a new server to your cluster.
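To make the horizontal option concrete, here is a minimal sketch (in Python, with hypothetical server names) of spreading incoming requests round-robin across a pool, so that scaling out is just a matter of adding a server to the list:

```python
from itertools import cycle

# Hypothetical pool of web servers added by horizontal scaling.
servers = ["web-1", "web-2", "web-3"]

def make_dispatcher(pool):
    """Return a function that spreads requests across the pool round-robin."""
    ring = cycle(pool)
    return lambda request: (next(ring), request)

dispatch = make_dispatcher(servers)
# Six requests land evenly: two on each server.
assignments = [dispatch(f"req-{i}")[0] for i in range(6)]
```

Real load balancers (nginx, HAProxy, Amazon ELB) add health checks and weighting on top of this basic idea.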

09 Dec

High Availability and Disaster Recovery

The terms High Availability (HA) and Disaster Recovery (DR) are heard frequently and often used interchangeably, which is why it is important to clarify the distinctions between the two, so that companies understand the unique capabilities and role of each and how each can be used most effectively within their organization. High Availability is a technology design that minimizes IT disruptions by providing IT continuity through redundant or fault-tolerant components. Disaster Recovery, by contrast, is a pre-planned approach for re-establishing IT functions and their supporting components at an alternate facility when normal repair activities cannot recover them in a reasonable time frame.

Can disaster recovery include high availability? It can, and often does. This configuration typically takes the form of highly available clustered servers for an application within a production data center, with backup hardware in the recovery data center. Data from the production servers is backed up or replicated to the recovery data center, so systems are both protected from component failures at the production data center and recoverable during a disaster at the recovery data center.
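As a toy illustration (a hypothetical Python model, not any real replication product), the combined design can be sketched as a store whose writes land at the production site and are copied to the recovery site, so the data survives the loss of the production data center:

```python
class ReplicatedStore:
    """Toy model: writes hit the production site and replicate to the DR site."""

    def __init__(self):
        self.production = {}
        self.recovery = {}
        self.production_up = True

    def write(self, key, value):
        self.production[key] = value
        self.recovery[key] = value  # would be asynchronous replication in practice

    def read(self, key):
        # HA clustering inside the production site masks component failures;
        # the DR copy takes over only when production is lost entirely.
        site = self.production if self.production_up else self.recovery
        return site[key]

store = ReplicatedStore()
store.write("orders", 42)
store.production_up = False  # simulate a disaster at the production data center
value_after_disaster = store.read("orders")
```

The key design point is that the replica is kept at a separate facility, which is what distinguishes DR from redundancy within a single data center.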

You may also come across end users talking about adding a “business continuity disaster recovery” solution when they really intend to make a service highly available. More often than not, elements of both high availability and disaster recovery are blended in the discussion. If you’re in the role of a service provider listening to requirements, asking for clarification on two questions will help: is geographic diversity needed, and how much downtime can be tolerated before a system is restored? With those answers you’ll know what to expect and be set on the right path.

07 Dec

Web Server Farms: how do they work?

A server farm, also referred to as a server cluster, computer farm, or ranch, is a group of computers acting as servers and housed together in a single location. A server farm streamlines internal processes by distributing the workload between the individual components of the farm, expediting computing tasks by harnessing the power of multiple servers. A web server farm can be either a website that has more than one server, or an ISP (Internet service provider) that provides web hosting services using multiple servers.

Farms rely on load-balancing software that can track demand for processing power from different machines, prioritize tasks, and schedule and reschedule them depending on priority and the demand users put on the network. When one server in the farm fails, another can step in as a backup. In a business network, a server farm or cluster might provide services such as centralized access control, file access, printer sharing, and backup for workstation users. The servers may have individual operating systems or a shared operating system, and may also be set up to provide load balancing when there are many server requests. On the Internet, a web server farm, or simply web farm, may refer to a website that uses two or more servers to handle user requests. Typically, serving user requests for the files (pages) of a website can be handled by a single server; larger websites, however, may require multiple servers.
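A rough sketch of that failover behaviour, in Python with made-up server names: the balancer skips servers marked unhealthy and sends each request to the least-loaded surviving member of the farm:

```python
def route(request, pool, healthy):
    """Send the request to the least-loaded healthy server, skipping failed ones."""
    candidates = [s for s in pool if healthy.get(s, False)]
    if not candidates:
        raise RuntimeError("no healthy servers in the farm")
    # Pick the server with the fewest active requests (simple load balancing).
    target = min(candidates, key=lambda s: load[s])
    load[target] += 1
    return target

pool = ["farm-a", "farm-b", "farm-c"]
load = {s: 0 for s in pool}
healthy = {"farm-a": True, "farm-b": False, "farm-c": True}  # farm-b has failed
# Requests are spread over the surviving servers only.
targets = [route(f"req-{i}", pool, healthy) for i in range(4)]
```

Production load balancers refine this with periodic health probes, connection draining, and weighted scheduling, but the core loop is the same.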

Combining servers and processing power into a single entity has been relatively common for many years in research and academic institutions. Today, more and more companies are utilizing server farms as a way of handling the enormous computerization of tasks and services they require.

04 Dec

Amazon Internet of Things: The world of Cloud Computing

The term “Internet of Things” may seem to describe itself. IoT is a network of physical objects, or “things”, embedded with software, electronics, network connectivity, and sensors. Its role is to collect and exchange data, allowing objects to be sensed and controlled remotely across existing network infrastructure. This creates more opportunities for direct integration between computer-based systems and the physical world, resulting in economic benefit, improved efficiency, and accuracy. In recent news, Amazon.com has set its sights on parlaying its cloud-computing dominance into a big stake in the world of IoT. The e-commerce giant launched AWS IoT this October, a new cloud computing service in its Amazon Web Services division.

Amazon’s AWS IoT allows customers to build their own cloud apps to remotely control machinery, manage supply chains, track inventory, and handle thousands of other tasks. In a way, Amazon is playing catch-up with established cloud players, as well as with smaller start-ups that have been offering cloud services tied to development platforms for years. Some of these smaller players, such as Electric Imp, Particle (formerly Spark Labs), and Ayla Networks, have been hosting their cloud offerings on AWS. If Amazon keeps its game up, it might find itself competing with its own key partners.

12 Nov

Network Latency: Issues we all may come across

Network latency is the term for any kind of delay that happens in data communication over a network. Connections with small delays are called low-latency networks, whereas connections that suffer long delays are called high-latency networks. High latency creates bottlenecks in any network communication: it prevents the data from taking full advantage of the network pipe and effectively decreases the communication bandwidth. The impact of network latency on bandwidth can be temporary or persistent, depending on the source of the delays.

Both bandwidth and latency depend on more than your Internet connection – they are affected by your network hardware, the remote server’s location and connection, and the Internet routers between your computer and the server. Packets of data don’t travel through routers instantly: each router a packet passes through introduces a delay of a few milliseconds, which can add up when the packet has to traverse many routers to reach the other side of the world. Some types of connections – satellite Internet connections, for example – have high latency even in the best conditions: it generally takes between 500 and 700 ms for a packet to reach an Internet service provider over a satellite link.
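The way those per-hop delays accumulate can be shown with simple arithmetic (the hop delays below are illustrative values, not measurements):

```python
# Hypothetical per-router delays, in milliseconds, along a path to a remote server.
hop_delays_ms = [2, 3, 5, 8, 4, 6, 7, 5]  # one entry per router traversed

one_way_ms = sum(hop_delays_ms)       # total delay for the request to arrive
round_trip_ms = 2 * one_way_ms        # request out plus response back

# A satellite link adds a large fixed delay on top of any terrestrial path;
# 550 ms sits within the 500-700 ms range typical for satellite connections.
satellite_extra_ms = 550
satellite_round_trip_ms = round_trip_ms + satellite_extra_ms
```

Eight hops of a few milliseconds each already yield a noticeable round trip, which is why routes with many hops, or a single high-latency link, feel slow even on a fast connection.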

Latency isn’t just a problem for satellite Internet connections, however. You can probably browse a website hosted on another continent without noticing latency very much, but if you are in California browsing a website whose servers are located only in Europe, the latency may be more noticeable. There is no doubt that latency is always with us; it’s just a matter of how significant it is. At low latencies, data transfers almost instantaneously and we can’t perceive a delay; as latencies increase, the delay becomes more and more noticeable.

10 Nov

What is Virtualization?

When you hear people talk about virtualization, they’re usually referring to server virtualization, which means a combination of software and hardware engineering that creates Virtual Machines (VMs) – an abstraction of the computer hardware that allows a single machine to act as if it were many machines. Each virtual machine can interact independently with other devices, applications, data and users as though it were a separate physical resource/unit.

Why is virtualization used? A growing number of organizations use virtualization to reduce power consumption and air-conditioning needs and to trim the building space and land requirements that have always been associated with server-farm growth. Virtualization also provides high availability for critical applications and streamlines application deployment and migration. It can simplify IT operations and allow IT organizations to respond faster to changing business demands.

Virtualization is not a magic bullet for everything. While many workloads are great candidates for running virtually, applications that need a lot of memory, processing power, or input/output may be best left on a dedicated server. For all of its upsides, virtualization can also introduce new challenges that firms have to face. In most cases, though, the cost and efficiency advantages will outweigh most if not all of the cons, and virtualization will continue to grow and gain popularity in today’s world.


© 2014 Alaska Hosting. All rights reserved.
