
By the DynaSis Team

With more than half of small and medium-sized businesses (SMBs) backing up their data at least weekly, and 15% backing up every day, per a 2014 study, SMBs have made big strides in the use of backup technologies over the past few decades. Despite this fact, many firms are still not taking adequate steps to ensure the data in those backups can meet the operating needs of the business.

Physically backing up is only a small piece of the puzzle. Where and how the data is stored and the level of data availability (whether employees can access and restore it quickly and easily, when needed) are just as important.

There are many additional considerations for a robust backup management solution, and we will delve into the individual aspects of backup―from data security and mobility to retention and deduplication―in later blogs. Today, we’d like to offer three foundational principles that can help companies better protect their data and achieve more useful backups.

Eliminate Tapes, Forever: Tape backups served millions of companies well for decades, but today there is a much better solution. On-site tape storage is especially risky―and potentially worthless in the event of a physical disaster at your location. Furthermore, tapes deteriorate over time. Cloud-based backups, which almost always offer redundancy, are a far safer means of protecting your data.

Establish Data Access Thresholds: In the backup industry, there are three key metrics for companies to address: Recovery Time Objective (RTO; the target time within which you want your data, applications and critical IT-related processes restored after an outage), Recovery Point Objective (RPO; the point in time to which you want your data restored, which determines how much recent work you can afford to lose) and Maximum Tolerable Outage (MTO; the longest amount of time your business could be disrupted by loss of access to your data, email and applications before it jeopardized your business continuity and/or client relationships).

Although these metrics are often discussed in relation to extreme situations, they help companies determine their risk tolerance during data outages or losses of any type. DynaSis has written a white paper that discusses these metrics and other disaster recovery issues in more detail.
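
As a rough illustration of how these metrics translate into an actual backup plan, the short sketch below checks a hypothetical backup interval and restore estimate against RPO and RTO targets. All of the numbers and function names are assumptions for the example, not recommendations.

```python
# Illustrative sketch: checking a hypothetical backup plan against RPO/RTO
# targets. All values and names below are assumptions for the example,
# not recommendations.

def evaluate_backup_plan(backup_interval_hours, estimated_restore_hours,
                         rpo_hours, rto_hours):
    """Return human-readable findings for the hypothetical plan."""
    findings = []
    # Worst-case data loss roughly equals the time since the last backup.
    if backup_interval_hours > rpo_hours:
        findings.append(
            f"RPO gap: backups every {backup_interval_hours}h could lose more "
            f"than the {rpo_hours}h of data deemed tolerable."
        )
    # The estimated restore time must fit within the recovery time objective.
    if estimated_restore_hours > rto_hours:
        findings.append(
            f"RTO gap: an estimated {estimated_restore_hours}h restore exceeds "
            f"the {rto_hours}h recovery target."
        )
    return findings or ["Plan meets the stated RPO and RTO targets."]

if __name__ == "__main__":
    # Example: nightly backups (24h) and a 6h restore, measured against a
    # 4h RPO and an 8h RTO.
    for finding in evaluate_backup_plan(24, 6, rpo_hours=4, rto_hours=8):
        print(finding)
```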

Protect Your Bottom Line Along with Your Data: A core feature of any backup plan must be adequate protection for the data. However, it’s just as important for these solutions to protect the business and its employees. Numerous studies have determined that excess complexity or unavoidable data loss hurts everyone. One survey of SMB IT pros found that 33 percent said even a small data loss hurts corporate bottom lines, and 32 percent said it results in missed business opportunities.

Furthermore, data loss impacts office morale (24 percent of respondents), employee work-life balance (25 percent of respondents) and employee loyalty (11 percent had employees quit as a result of a data loss). The surest way to minimize these impacts is to have backups that allow selective retrieval rather than restoration of entire data stores. When employees can cherry-pick a single lost file rather than go through a tortuous process to retrieve a backup, they are happier, more productive and more successful at keeping your business operating at its peak.

Here at DynaSis, we have been providing best-practices data backup services―with optional selective restore capabilities―for many years. Our newest platform, the DynaSis Data Vault, is truly groundbreaking. To learn more or get started, please give us a call.


By the DynaSis Team

Savvy business owners recognize that newer technology is faster and more efficient than outdated PCs, cellphones, and other hardware and devices. Nevertheless, for cash-strapped businesses seeking to maximize IT budgets, the question then becomes, is modern technology so much of an improvement that it is worth the expense? Short of replacing hardware when it dies, how can you calculate the value of replacing old hardware and devices with modern ones?

We’ll talk about that in a minute, but first, we wanted to share an interesting fact. In an interview with the Wall Street Journal, Bob O’Donnell, program vice president at research firm IDC, noted that workers waste up to three days per year waiting for older devices to boot up or load web pages.

This is a metric that is easy for anyone to calculate. For example, if you are paying an employee $52,000 a year, plus benefits and paid time off (traditionally considered to add one-third to the labor cost), your labor expense is nearly $70,000 a year. Assuming roughly 260 working days a year, that equates to roughly $266 per working day.

If that person is working on an older PC, you are wasting as much as $798 a year in the time he or she spends waiting for it to perform routine operations, compared to the time the same work would take on a new machine.
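
For anyone who wants to reproduce the math, here is a minimal sketch of the calculation above, assuming the one-third loading factor for benefits and paid time off and a 260-working-day year (the working-day count is our assumption; small rounding differences from the figures quoted above are expected).

```python
# Reproducing the wasted-time cost estimate above. Assumptions: one-third
# loading for benefits/PTO and a 260-working-day year; rounding differs
# slightly from the figures quoted in the article.

base_salary = 52_000                  # annual salary in dollars
loaded_cost = base_salary * 4 / 3     # salary plus ~one-third for benefits/PTO
working_days = 260                    # assumed working days per year

daily_cost = loaded_cost / working_days
wasted_days = 3                       # up to three days per year lost to slow hardware
annual_waste = daily_cost * wasted_days

print(f"Loaded labor cost: ${loaded_cost:,.0f} per year")
print(f"Daily labor cost:  ${daily_cost:,.2f} per working day")
print(f"Wasted-time cost:  ${annual_waste:,.2f} per year")
```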

That factoid surprised us, and it exemplifies how the hidden costs of using outdated technology really add up. Of course, faced with small budgets and daily pressures, it is still easy for business owners to put upgrades off.

Allocating IT budgets effectively requires balancing end-user costs, direct technology costs and the value of turning productivity losses into gains. That’s why we recommend that organizations not make upgrade decisions without considering the business use case and support model—for the duration of the lifecycle—of every system they currently own and will someday replace.

When an organization comes to us, we also urge its decision makers to order a DynaSis IT Assessment before making upgrade decisions. This non-invasive exploration maps and evaluates the company network, hardware, devices and other IT elements and returns suggestions for urgent and recommended improvements. Sometimes, the worst performers in the company—the machines that are destroying productivity or even causing damage to corporate operations or public reputation—aren’t obvious to the untrained eye.

When evaluating the benefits of upgrading to modern technology, companies should also consider the negative impacts of machine/device age, including performance (not only processing speed but also ability to run modern software), availability (downtime concerns) and security. Other considerations include usage (why and how employees interact with the system for work purposes) and mobility (whether or not the current hardware supports remote productivity).

By understanding all these factors, firms are in a much better position to recognize the single most important distinction in developing ROI—the difference between cost and value. Armed with that information, they can establish refresh cycles that not only eliminate the three days of wasted time we mentioned earlier but also reduce outages, increase worker satisfaction and increase competitive edge.

DynaSis offers on-demand CIO services, where virtual CIOs can give you as much assistance with evaluation and planning as you need. We also offer affordable IT design services and installation services. To learn more, we invite you to give us a call.


By the DynaSis Team

Proactive IT administration—where specialized technology continually scans a company’s networks and systems to detect and promote resolution of potential issues—is becoming increasingly popular among organizations seeking greater availability and employee productivity. With these technologies, small software “agents” installed on the systems scan for problems and either fix them automatically or alert technicians (often at a third-party provider’s location) who can step in and perform any necessary work.
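
To make the mechanism concrete, here is a minimal sketch of the kind of check such an agent might run, assuming a simple disk-usage threshold and a placeholder alert function; real agents monitor far more conditions and report to a central management console rather than printing to the screen.

```python
# Minimal sketch of a proactive monitoring "agent": check a health indicator
# and raise an alert before users notice a problem. The 85% threshold and the
# alert mechanism are illustrative assumptions.

import shutil

DISK_USAGE_ALERT_THRESHOLD = 0.85  # alert when a volume is more than 85% full

def check_disk_usage(path="/"):
    """Return (used_fraction, alert_needed) for the given volume."""
    usage = shutil.disk_usage(path)
    used_fraction = usage.used / usage.total
    return used_fraction, used_fraction > DISK_USAGE_ALERT_THRESHOLD

def send_alert(message):
    # Placeholder: a real agent would notify a technician or a central console.
    print(f"ALERT: {message}")

if __name__ == "__main__":
    used, needs_alert = check_disk_usage("/")
    if needs_alert:
        send_alert(f"Volume is {used:.0%} full; intervention recommended.")
    else:
        print(f"Disk usage OK ({used:.0%} full).")
```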

The question for firms that haven’t adopted proactive IT administration then becomes, “Is this service really worth it?” For companies that can tolerate a lengthy amount of system downtime without having their business disrupted, the answer might be, “No.” However, most firms consider significant downtime unacceptable, making the theoretical benefits very attractive. DynaSis provides this service to its customers, so we thought it would be interesting to examine the hard-dollar benefits of proactive IT monitoring and management.

We found a 2010 survey, conducted by a leading provider of the software that enables services such as these, that indicates the ROI is very appealing. In the survey of 100 companies using proactive administration technologies, 60 percent reported that the solution proved its worth within 60 days of purchase.

A closer inspection revealed that the savings from reduced IT expense alone were significant. Productivity gains for IT-specific functions amounted to 55 minutes a month, and included a reduction in the time required to perform backups, system upgrades and patch installs, regulatory and compliance checks, and more.

Furthermore, these systems can also help with automated power management—where electricity flowing to idle machines is reduced. As a final “sustainability” bonus, proactive IT administration cuts down on technician visits to resolve problems, which reduces miles driven and, therefore, corporate carbon footprints.

These benefits were calculated completely independent of the “business downtime” metric we mentioned initially—the one that most companies consider in their decision. We see, nearly every day, how proactive system monitoring and management makes a major difference in IT outcomes in this area. For example, without a proactive monitoring solution in place, overloaded hardware can go undetected for months, until it causes a major outage that must be resolved with a lengthy technician visit. With proactive administration, technicians are alerted immediately, and they can often fix the problem remotely with a few clicks of the mouse. Users have no knowledge of the activity and company operations continue, unaffected.

The unobtrusive nature of proactive IT administration is one of the factors that sometimes make it hard for us to illustrate the benefit of this service to potential customers. If you have questioned the value of having remote intervention before trouble starts, we hope this information has helped. To learn more about proactive IT monitoring, management and administration, or to explore the services we offer in this area, we invite you to give us a call.


By the DynaSis Team

Unified communications―a service found in many VoIP phone solutions that extends communication by incorporating location, availability and notification―is changing the way companies and their personnel do business. For example, no longer are customers forced to leave a message at 4:55 pm on Friday and wait for a call back on Monday, merely because their representative happened to be down the hall or out of the office.

Now, phone systems can find the representative, notify him or her, transfer the call to another device or location, and perform many other connectivity functions without any operator interaction. These systems are often called Unified Communications as a Service (UCaaS), because that is exactly what they are―a technology service, usually cloud-based, that works with the underlying VoIP hardware to achieve greater functionality.

UCaaS dovetails perfectly with the “always on” nature of business communications today, where personnel can be available via smartphone, tablet, laptop or other device, in any location. UCaaS is also a key component of customer service, as the technology does more than locate and notify a specific person; it can also connect a caller to any employee who matches a specific profile―routing the caller to an available salesperson, for example.

In 2015, we expect to see even more development in UCaaS, with such innovations as web-based, real-time communications (WebRTC) being used to unify communications even further. Already, workers are using their mobile devices to take and share on-the-fly videos with other staff, vendors and customers. When these solutions are built into a UCaaS platform, it extends business functionality and customer service even further.

According to Nemertes Research, 68% of all companies have implemented at least one cloud-based UCaaS solution. With customers expecting more connectivity and faster response times, and VoIP and UCaaS solutions becoming more affordable than ever, firms that remain tied to legacy telephone systems risk losing their competitive edge. To learn more about VoIP or UCaaS, we invite you to give us a call.


By the DynaSis Team

Despite the fact that mobile threats are increasing exponentially (mobile malware jumped 75% in 2014, alone), an astonishing percentage of mobile phones have no security protections at all. Per a 2014 survey, only 14% of devices have anti-virus software, and 34% of mobile phone owners don’t even use the screen lock feature. As a result, organizations that allow users to store company data on their mobile devices without added precautions are exposing their company and its assets to extreme risk.

Implementing a mobile device management system is a key step in securing the enterprise against an onslaught of inadequately secure devices, but educating users to reduce the danger is equally important. As with desktop platforms, users are the weakest link in any security chain. Following are some suggestions that will help protect your employees―and your business.

  1. Simulate the danger. Create and distribute, from a non-office device, a phishing-type email and see who takes the bait. (Phishing emails are those that look real but whose links take users to malicious websites. Phishing messages opened on mobile devices can infect laptops and corporate systems, as well, so companies and employees must take them seriously.) Making a bad decision firsthand is often a far more effective teacher than merely hearing about the risk.
  2. Create a Training Program. Some personnel don’t enact security measures (such as passcodes) on their phones because they don’t know how. Either teach them how to properly secure their phones or have your tech team secure the devices upon request. Additionally, show personnel how to create sufficiently robust passcodes and ask them to adhere to the recommendations; a simple example of such a check appears after this list.
  3. Outlaw “Jailbreaking.” Jailbreaking―the process of bypassing a device’s operating system restrictions so the user can install unauthorized apps―is predicted to be the cause of up to 75% of mobile security breaches by 2017 (per research firm Gartner). To secure corporate assets, companies should create policies against jailbreaking with strict penalties for non-compliance, including loss of device use (for corporate-owned phones), network access or other privileges.
  4. Implement a “No Consequences” Policy for Device Loss with Immediate Notification. Threatening employees with termination if they lose their corporate devices, or shaming them in front of their peers, makes them afraid to report device loss immediately. Their logic is that they may find the device and avoid reprisal. Any reporting lag time puts company information at risk, so companies should encourage employees to report device loss immediately―and should implement “find me” services for all phones operating on the corporate network.
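
As a small illustration of the passcode guidance in item 2, the sketch below applies a deliberately minimal set of assumed rules; an actual policy should be enforced through your mobile device management platform and reflect your own risk tolerance.

```python
# Minimal sketch of a passcode policy check (illustrative rules only).
# A real policy would normally be enforced through your MDM platform.

COMMON_PASSCODES = {"0000", "1234", "1111", "123456", "password"}

def passcode_is_robust(passcode, min_length=6):
    """Return True if the passcode meets the assumed minimum rules."""
    if len(passcode) < min_length:
        return False
    if passcode.lower() in COMMON_PASSCODES:
        return False
    if len(set(passcode)) == 1:          # e.g. "999999" (one repeated character)
        return False
    return True

if __name__ == "__main__":
    for candidate in ["1234", "999999", "7G2k9x"]:
        print(candidate, "->", "OK" if passcode_is_robust(candidate) else "too weak")
```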

For companies without a specially trained “mobile technology management” team, some of these activities―and others such as policy development and device security―can be complicated and confusing. To discuss implementing these and other protections for your firm, we invite you to give us a call.


By the DynaSis Team

In last week’s article, we briefly mentioned virtualization in the context of our discussion on disaster recovery. For those unfamiliar with the concept, virtualization involves partitioning one or more physical servers into multiple virtual machines (VMs), each of which can have its own file store, overall purpose and operating system and be isolated from the others. (Think of a VM as a sophisticated version of a partition, e.g. the C: or D: drive, on a PC.)

Virtualization greatly enhances operating efficiency and can provide a much more secure environment than a traditional server setup. Because each virtual machine is discrete, virtualization can make it easier to segregate and protect data of all types.

However, virtualization also increases security challenges, because there are more machines to manage―and potentially more ways for a cybercriminal to find his way in. This week, we’ll talk about a few challenges that business owners face in protecting their data in virtual environments.

  1. Sprawl: The ease of creating VMs is leading to sprawl―much like a suburb that blossoms because land is inexpensive and home building practices are more efficient. This potential for expansion makes effective management and security a greater challenge, especially if “tech-savvy” users can access systems and create their own VMs; a simple sprawl check appears in the sketch after this list.
  2. Density: Current technologies enable a few physical servers to host a very large number of virtual servers, but firms focused on efficiency often pack those hosts to maximum capacity. If not properly configured, these servers may lack the headroom for IT management functions, including security.
  3. Big Data: Compounding this issue, the explosion of data―and the need to save so much of it to a permanent storage location―is causing data stores to balloon, with multi-terabyte volumes no longer unusual. Keeping tabs on―and ensuring appropriate security for―this much data is a daunting task.
  4. Application Integration: With the increase in virtualized applications, companies are challenged to provide the same level of protection and security that they could on a physical server.
  5. Granular Recovery: In the early days of virtualization, the traditional backup recovery approach was to remount an entire machine from the backup. In today’s environment, companies want file- or object-level restores, but the security challenges of controlling such selective access and retrieval operations are considerable.
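
To illustrate the sprawl problem in item 1, here is a minimal sketch that compares a list of running VMs against an approved inventory and flags anything unaccounted for. The VM names are invented, and a real check would pull the running list from your hypervisor’s management tooling rather than from a hard-coded list.

```python
# Illustrative sketch: flag virtual machines that are running but not in the
# approved inventory (a basic "sprawl" check). The VM names are made up; a
# real check would query the hypervisor's management tooling.

APPROVED_VMS = {"file-server-01", "erp-app-01", "sql-prod-01"}

def find_unapproved(running_vms):
    """Return the set of running VMs that nobody signed off on."""
    return set(running_vms) - APPROVED_VMS

if __name__ == "__main__":
    running = ["file-server-01", "erp-app-01", "test-vm-jsmith", "sql-prod-01"]
    rogue = find_unapproved(running)
    if rogue:
        print("Unapproved VMs found:", ", ".join(sorted(rogue)))
    else:
        print("All running VMs are in the approved inventory.")
```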

Fortunately, an appropriate combination of automated monitoring and hands-on management makes it much easier to ensure visibility, management and security of VMs. DynaSis has spent nearly three decades perfecting its approach to managing and securing IT systems at all levels, from mobile devices to servers, including virtualized environments.


By the DynaSis Team

As the U.S. lumbers through yet another year of debilitating winter storms, it is becoming painfully clear to more businesses, every day, that disaster recovery isn’t an issue only for the “summer storm” months. Here in our corporate home of Atlanta, we have been spared the hardships of last winter, so far. However, February is one of the months when we are most likely to experience a winter storm.

Disaster recovery (as opposed to its longer-term cousin, business continuity) is about rapid resilience. Think of it as your “bounce back” metric. In the event an ice storm paralyzes your company and keeps employees at home, will your “doors” still open the next day? What if the key employee charged with ensuring continuity is stuck on the side of a road in his or her car?

These are questions many firms fail to consider when they think of disaster recovery. In our discussions with new customers and prospects, we are amazed at how many have narrowly defined plans that require every piece of the puzzle to fall into place, perfectly. As anyone who has experienced a disaster knows, crisis events never unfold perfectly.

Some companies accept the idea of closing their doors for a day or a week in the event of extreme weather or other closure event. Others cannot lose even an hour of operation. An alarmingly large number haven’t tested their plans adequately or don’t have a step-by-step plan for recovery if the impetus for disaster is technology (e.g. a blown server) rather than weather.

In our virtual travels around the Internet, we found a 10-minute survey, prepared by the IT Disaster Recovery Preparedness (DRP) Council (a non-partisan advocacy group composed of IT business, government and academic leaders). It is designed for firms operating virtualized environments. However, the majority of its questions are germane to all businesses, virtualized or not. If you have a few minutes, take the quiz and see where your business places.

More importantly, make 2015 the year when you commit to ensuring your firm adheres to basic disaster recovery recommendations. On the IT side, double-check your backup plan and find out how long it will take to restore your data, should you need it. Ensure your employees can access company data from either your server or your backup, securely and remotely. (Preferably, they should be able to access it from their phones―and know how to tether their phones to a laptop for an Internet connection. Cellular providers are federally mandated to maintain a very high level of continuity and backup power.) If your business relies on ordering or other systems hosted in the cloud, explore the disaster recovery plans of your providers, too.

On the people side, assign someone in your company to work on evaluating and updating any materials you have. In the wake of even a small disaster, confusion over mission critical activities and chain of command brings many firms to their knees. Make a schedule to test your plan.

Finally, remember that you don’t have to handle these tasks alone. DynaSis offers four different disaster recovery solutions based on your level of outage tolerance. We can have you up and running, even in the event of a site disaster, in two hours or less. To explore the subject more thoroughly, we invite you to download our white paper on disaster recovery planning.

Many business owners are surprised to learn that they can significantly improve their business resilience with minimal additional investment, reducing both cost and risk. To learn more, give us a call.


By the DynaSis Team

With a new year underway, companies large and small are making plans for the IT projects they believe will support their companies, their staffs and their business goals. Last year was definitely a watershed for technology, for everything from cyberattacks to cloud migration, making us wonder if these events and trends would have an impact on 2015 goals.

Scanning the horizon, we found that IT leaders appear to be focusing on a number of core technologies. Per a sampling of more than 1,000 CIOs in the 2015 TechTarget IT Priorities Survey, some of the “hot” initiatives for 2015 are mobility (36%), virtualization (30%) and at the top of the list, data/data center consolidation (40%).

Another survey of nearly 3,000 CIOs, by research firm Gartner, indicated similar results. For this group, business intelligence/data analytics is the number one priority for 2015, with 50% of respondents ranking it first. Next in line are infrastructure (hardware platforms) and mobility, with cloud computing not far behind.

In the report, Gartner noted that these investments aren’t simply cyclical IT refreshes. Rather, based on survey comments Gartner predicted, “For at least the next decade, deep technology-driven innovation will be the new normal for market leaders.”

In other words, these tech executives are building out the platform solutions that will help their companies function in a world where technology isn’t merely a business enabler―it’s the chief cog in the business wheel, with all other functions revolving around it.

These IT executives have realized that their data is spread among too many storage locations, or that the manner in which employees are storing data isn’t efficient or well organized. They know that they can do more with the data they have, but they need robust solutions to help them leverage and analyze it.

They also understand that mobility is paramount to productivity, but they know they cannot continue to allow ad-hoc connections and random mobile solutions on their networks. They want a dedicated platform that unites and manages everything.

In sum, these executives want to build strong, secure frameworks that facilitate productivity, mobility and meaningful use of and access to data.

This is an approach we have long espoused. Businesses need productivity (which also means availability) as well as mobility and security, and they don’t derive much value from having only one or two of these elements in place. The best solutions are those that encompass all three.

At DynaSis, we recognize that small and medium-sized businesses (SMBs) need powerful IT platforms even more than their larger counterparts do. We have been helping our customers deploy and maintain comprehensive, end-to-end solutions for years, whether on-premise through Digital Veins, in the cloud with ITility by DynaSis, or as a hybrid that gives them both (Ascend).

This year promises to be both exciting and challenging in the IT world, with many new developments, good and bad, and DynaSis is ready for all of them. We look forward to helping SMBs harness technology for their benefit without becoming victims of those who would use it for evil. To learn more or get started, please give us a call.


By the DynaSis Team

Last week, we talked about the cloud and promised to explain some of the nuances between data storage, backup, sharing and syncing. These terms seem pretty obvious to computer users with a bit of experience, but solutions aren’t always what they seem. Furthermore, they can overlap. In this blog, we hope to clear the “cloud” of confusion that surrounds them.

Storage: Computer storage, aka data storage, is the media used to house data. Data can be stored on magnetic disks (“hard drives”) or on solid-state (flash-based) drives. Storage can be on-premise (e.g. inside a PC, laptop or server) or it can be cloud based―residing on a server at a data center. (Data can also be stored on memory chips, but this data isn’t generally accessible, so we aren’t talking about it, here.)

Backup: A backup is an archive of a dataset. It can be a complete copy of a drive, or only a partial backup. Backups can be retained on the same media as regularly accessible storage, although some companies also still use a tape-based technology.

File Sharing: File sharing from a universal perspective means nothing more than literally letting someone have one of your files―whether via email, thumb drive or a cloud-based sharing service. However, here we are referring to a framework that supports the sharing of files. These can be third-party storage solutions, like DropBox, or they can be systems set up on a corporate server and network, with permissions that allow sharing of resources stored locally.

File Syncing: Synchronizing files is the process of ensuring that a file stored in two places―for example, on a desktop and a server, or on a laptop and in DropBox―is the same version in both locations. It can also involve syncing a file between two users―for example, when two co-workers collaborate on a document together.
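
For readers who want a peek under the hood, here is a minimal one-file sync sketch: it compares the two copies by content hash and, when they differ, copies the more recently modified one over the other. The file paths are placeholders, and real sync services also handle conflicts, deletions and whole folder trees.

```python
# Minimal one-file sync sketch: if two copies differ, the newer one wins.
# Real sync tools handle conflicts, deletions and entire folder trees.

import hashlib
import os
import shutil

def file_hash(path):
    """Return the SHA-256 hash of a file's contents."""
    with open(path, "rb") as f:
        return hashlib.sha256(f.read()).hexdigest()

def sync_pair(path_a, path_b):
    if file_hash(path_a) == file_hash(path_b):
        return "Already in sync."
    # Copy the more recently modified file over the older one.
    newer, older = sorted([path_a, path_b], key=os.path.getmtime, reverse=True)
    shutil.copy2(newer, older)
    return f"Copied {newer} over {older}."

if __name__ == "__main__":
    # Example paths are placeholders.
    print(sync_pair("report_on_laptop.docx", "report_on_server.docx"))
```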

How Do They Intersect?

Storage can be used for backup purposes, and storage can be configured to enable file sharing. Dedicated programs can also sync files between two or more units of storage (e.g. hard drives). In other words, storage is the core element on which three activities―backup, file sharing and file syncing―rely. Even when you share a file via a method that seems impermanent, like email, the file is being retained in the data store of that email client.

Despite the fact that storage is an underlying “container” for the files involved in these activities, users cannot assume that all storage will perform these tasks equally well. That’s essentially the point we want to make with this educational journey.

For example, we’ve seen many users employ a solution like DropBox for file storage, syncing, sharing and backup. After all, DropBox makes backups of a user’s files and syncs them to his or her computer. That user can also share folders with other DropBox users. So, in essence, it can perform all four tasks.

Similarly, we have seen companies employ standard hard drives to perform backup, sometimes using backup software; other times by simply making a manual copy (what the IT world calls an image) of their drives every night.

Both of these approaches are technically backup, but they may not be managed in a manner that best protects the firm, nor will they necessarily be easy to restore in the event of a disaster. There are better solutions―dedicated “backup appliances” that include software that manages the backup and ensures its integrity, for example. There are also dedicated backup services―often connected to cloud-based storage― that incorporate recovery features, as well.
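
As a rough illustration of what “managing the backup and ensuring its integrity” can involve, the sketch below copies a file to a dated backup name and then verifies the copy against the original’s checksum. The paths are placeholders, and a dedicated backup appliance or service does far more (scheduling, retention, restore testing).

```python
# Minimal backup-with-verification sketch: copy a file to a dated backup
# name and confirm the copy matches the original's checksum. Paths are
# placeholders; a real backup service also handles scheduling and retention.

import hashlib
import shutil
from datetime import date
from pathlib import Path

def sha256(path):
    return hashlib.sha256(Path(path).read_bytes()).hexdigest()

def backup_and_verify(source, backup_dir):
    backup_dir = Path(backup_dir)
    backup_dir.mkdir(parents=True, exist_ok=True)
    target = backup_dir / f"{Path(source).name}.{date.today():%Y%m%d}.bak"
    shutil.copy2(source, target)
    if sha256(source) != sha256(target):
        raise RuntimeError(f"Backup verification failed for {target}")
    return target

if __name__ == "__main__":
    print("Backed up to:", backup_and_verify("customers.db", "backups"))
```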

Similarly, file sharing and syncing can be done in DropBox or another file storage solution, but these tools don’t give the corporate entity as much control over where and to whom the files go. As a result, we recommend that no company―no matter how small―implement backup, sharing, or syncing solutions for company assets without guidance from a knowledgeable IT services firm, like DynaSis.

If you’d like to learn more about best-practices file backup, sharing and syncing―and even about storage solutions that give you more control, please give us a call.


By the DynaSis Team

Cloud storage has penetrated many aspects of our lives in the past few years and is increasingly common at the corporate level, whether we realize it or not. When a cellular provider backs up your smartphone contacts, they are being stored in that company’s cloud. Virtually all the myriad “free storage” offers we may use daily, from Google Drive to DropBox, are cloud-based.

Yet, technically, all of these clouds are on-premise, too. How is that possible? Every “cloud” must be tied to a physical server in a physical location. There is no massive, amorphous and anonymous storage cluster that has been created within the Internet by storage contributions from random players. Every cloud resides at some company’s “premise” (physical location), even if that location is a data center (which, after all, is owned by someone).

The concern that most business owners, CIOs and others have about the cloud stems from the fact that these clouds and their data don’t reside on the company’s in-house server. They fear letting someone other than themselves retain control of their data, even if the other entity is operating under best-practices security protocols they may never be able to attain.

As we mentioned in a November 2014 blog, cloud storage and services are generally far safer and more secure than those residing in-house, especially for small and medium-sized businesses without big-boy security budgets. Furthermore, it’s highly likely that your employees are using cloud-based tools, even if your company isn’t, and they may be storing corporate files there.

Cloud security is not the focus of this blog, although we would be happy to discuss cloud security with you. The point of this discussion is that a cloud is nothing more than a dedicated amount of storage space on one or more servers that designated individuals can access remotely to store, sync and/or share files, run programs and perform other typical workplace functions. We say “designated” rather than authorized, because some clouds are open, meaning that the data there can be accessed by anyone, while others are access-restricted (secured).

In 2015, we will be talking about the cloud every month or so, introducing you to its various aspects and providing tips on how you can use it. Next week, we’re going to discuss the difference between storage, backup, sharing and syncing. These four operations are often intertwined, but whether they can happen in tandem with one another is up to the business owner to decide.

We believe that understanding “the cloud” and all of its possibilities is an important step to gaining confidence in this amazing technology. This knowledge will help you make the decisions that are right for your firm and enable you to secure your data to the greatest degree possible.

We’ll continue this discussion next week. In the meantime, if you would like to learn more about cloud computing or be introduced to one of our cloud productivity solutions, please give us a call.
