Achieve IT as a Service with Software-Defined Data Centers (SDDC)
Join Us at SDDC Expo New York
Software-defined data centers are considered by many to be the next step in the evolution of virtualization and cloud computing, as they provide a foundation that supports both legacy enterprise applications and new cloud computing services.
Learn about this architectural approach to IT infrastructure at SDDC Expo, being held June 10-12, 2014, at the Javits Center in New York, NY. At SDDC Expo, thought leaders and practitioners, researchers and analysts, vendors and customers will provide a diverse mix of views that will foster new discussions not just within the movement, but also beyond it.
See you in June!
The Top Keynotes, the Best Sessions, a Rock Star Faculty, and the Most Qualified Delegates at ANY SDDC Event!
The software-defined data center provides an agile, reliable and secure foundation for cloud, while also delivering the intelligence and control needed to create sustainable business value.
SDDC Expo is a premier conference that connects a wide range of stakeholders to provide a valuable and educational experience for all.
SYS-CON's Cloud Expo drew more than 7,000 attendees at the Jacob Javits Center.
Benefits of Attending the THREE-Day Technical Program
LEARN exactly why SDDC is relevant today from an economic, business and technology standpoint.
HEAR first-hand from industry experts how to govern access to compute, storage, and network resources based on corporate IT policies.
SEE how to control the data center.
DISCOVER what the core components of the Software-Defined Data Center are.
FIND OUT how to transform an inflexible, costly traditional data center into a cloud computing environment that is secure, virtualized and automated.
MASTER the three building blocks of the SDDC – network virtualization, storage virtualization and server virtualization.
HP experts explain how enterprise IT operators and planners can keep their data centers from spinning out of control despite new requirements, leverage the best of converged systems, and improve efficiency.
As software-driven data centers have matured and advanced to support unpredictable workloads like hybrid cloud, big data, and mobile applications, managing and operating that infrastructure efficiently has grown increasingly difficult.
At the same time, as enterprises seek to rationalize their applications and data, centralization and consolidation of data centers has made their management even more critical -- at ever larger scale and density.
MongoDB has announced that Crittercism, a San Francisco-based startup, is using MongoDB as the data store for its mobile application performance management (mAPM) solution. Crittercism helps developers and enterprises build better, faster, smarter, high-performance apps. The MongoDB-based solution currently handles 3 billion requests per day from applications run by over 800 million monthly active users.
Crittercism processes massive amounts of data using MongoDB, including detailed diagnostics on application performance; error reporting; cloud services; network breadcrumbs; device, carrier and OS trends; different application versions; and user behavior. With MongoDB, Crittercism can easily capture, store and analyze massive amounts of unstructured mobile data in real-time that is steadily increasing in volume and granularity.
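As a rough illustration of that pattern, here is a minimal PyMongo sketch, assuming a local MongoDB instance; the collection name, document fields, and index are my own illustrative choices, not Crittercism's actual schema.

```python
from datetime import datetime, timezone
from pymongo import MongoClient, DESCENDING

client = MongoClient("mongodb://localhost:27017")
events = client.apm.mobile_events   # hypothetical database and collection

# Schemaless documents let each event carry whatever diagnostics apply.
events.insert_one({
    "app_id": "com.example.shop",   # hypothetical app
    "app_version": "2.4.1",
    "os": "iOS 7.1",
    "carrier": "ExampleCell",
    "breadcrumbs": ["login", "cart", "checkout"],
    "error": {"type": "NSRangeException", "fatal": True},
    "ts": datetime.now(timezone.utc),
})

# An index on (app_id, ts) keeps recent-error queries fast as volume grows.
events.create_index([("app_id", 1), ("ts", DESCENDING)])
recent = (events.find({"app_id": "com.example.shop", "error.fatal": True})
          .sort("ts", DESCENDING).limit(10))
for doc in recent:
    print(doc["app_version"], doc["error"]["type"])
```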
When was the last time you had a technology conversation that did not include the word ‘cloud’ in it? Gartner predicts that by 2016 the bulk of IT spend will be for the cloud. Gartner also believes that ‘nearly half of large enterprises will have hybrid cloud deployments by the end of 2017.’ Cloud technology continues to evolve at breakneck speeds, and business wants to move to the cloud just as fast. This presents significant challenges for technologists, who need to ensure the business doesn’t go crashing into a brick wall while moving at these speeds.
The mere name "virtualization" conjures up images of expensive, complicated technologies that, for the most part, only the largest companies can grasp. In fact, virtualization brings many benefits to small businesses, two of them being IT efficiency and cost savings. The likes of VMware Workstation and Oracle's VirtualBox have brought this technology well within the budgets of small businesses.
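To make that low barrier to entry concrete, here is a minimal sketch of scripting VirtualBox from Python through its VBoxManage command-line tool, assuming VirtualBox is installed and on the PATH; the VM name and resource sizes are placeholders, and a real setup would also attach a disk and an install ISO.

```python
import subprocess

VM = "smallbiz-test"   # hypothetical VM name

def vbox(*args):
    # Shell out to the VBoxManage CLI and fail loudly on errors.
    subprocess.run(["VBoxManage", *args], check=True)

vbox("createvm", "--name", VM, "--register")             # create and register
vbox("modifyvm", VM, "--memory", "1024", "--cpus", "1")  # modest resources
vbox("startvm", VM, "--type", "headless")                # run without a GUI
```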
The following suggestions will help you gain the most benefit of virtualization within your small business.
SYS-CON Events announced today that Ambernet Technologies, the innovative “Cloud Management Center” company, will exhibit at SYS-CON's 14th International Cloud Expo®, which will take place on June 10–12, 2014, at the Javits Center in New York City, New York.
Ambernet Technologies is a leading global provider of cloud management software (CloudTruOps) and IT professional services to the enterprise, service provider and government markets. CloudTruOps is the industry’s first infrastructure-independent and service-aware software solution that provides a fully transactional single pane of glass for cloud service provisioning and orchestration, governance, policy, security, performance, self-service storefront, and billing/chargeback for multiple clouds. Ambernet's professional services arm delivers consulting, solutions, and support. Ambernet is a global company with headquarters in Dallas, Texas, and regional offices in Toronto, Canada, and Bangalore, India.
Google Analytics, Adobe Omniture, IBM Coremetrics, and most of today's state-of-the-art Web Analytics products can only tell you your customers' first or last touch.
In his session at 5th Big Data Expo, Joel Horwitz, Product Strategist and Experienced Analytics Professional at Alpine, will use advanced analytics attribution analysis and clickstream data collected from toolbars and web browsers to discover more touchpoints in the customer journey. He'll show example use cases with Hive, Hadoop, and Alpine to illustrate how this analysis was completed in less than two weeks’ time.
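To see why first/last-touch attribution undercounts, consider this toy Python sketch (my own illustration, not the Hive/Alpine pipeline from the session): last-touch attribution credits only the final channel in each converting user's path, while a fractional model spreads credit across every touchpoint.

```python
from collections import defaultdict

clickstream = [  # (user, channel) in time order; hypothetical data
    ("u1", "search"), ("u1", "display"), ("u1", "email"),
    ("u2", "social"), ("u2", "email"),
]
conversions = {"u1", "u2"}

paths = defaultdict(list)
for user, channel in clickstream:
    paths[user].append(channel)

last_touch = defaultdict(float)
fractional = defaultdict(float)
for user, path in paths.items():
    if user not in conversions:
        continue
    last_touch[path[-1]] += 1.0        # all credit to the final touch
    for channel in path:               # equal credit to each touch
        fractional[channel] += 1.0 / len(path)

print(dict(last_touch))   # {'email': 2.0} -- email looks like the only driver
print(dict(fractional))   # search, display, and social also get credit
```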
Big Data and its most prominent technical ingredient, Machine Learning, are all the rage these days, as the IT industry tries to convince companies that a technology revolution is underway. ("If you are not doing it, your competitors sure are, and by the time you realize it, it will be too late.") Data fracking, i.e., Big Data, is the 21st century's new oil that will power and grease stalled industries and reignite growth, or so the story goes.
While advanced analytics (it comes under various names - predictive analytics, data mining, and, more recently, data science) is great and has been in use for decades now, we are well reminded that it is just that - analytics. As such, it is a natural next step beyond straightforward SQL queries, but falls short of being anything near revolutionary. We are well advised to listen to the words of Larry Ellison (on the IT industry in general and the cloud in particular, but applicable to many new IT initiatives):
Technology is integrated into the DNA of business and of society itself. Some of the most recent entrants are cloud computing, Big Data, and mobile. By putting these technologies together, we arrive at an intersection that creates a "sweet spot" where innovation can be enhanced within your organization.
In his session at 5th Big Data Expo, Michael Gendron, professor of Information Systems at CCSU, will put those pieces together and give you tools to move your organization toward that sweet spot. He will provide an overview for each of the key components and an explanation of how these components support the BI Sweet Spot. The best practices discussed in this presentation represent the first-of-its-kind work in BI research that considers the inter-relationships and the combined effect of mobile, cloud and Big Data.
My first experience with an “inverted yield curve” was in 2001, just prior to the tech bubble bursting. I was working on a financial portal for an investment bank, and one of the charts was a yield curve. It looked odd all of a sudden, so I looked it up in a book of financial terms. An inverted yield curve occurs when interest rates for short-term capital are higher than interest rates for long-term capital. In other words, people are willing to pay a significant price to alleviate short-term concerns because they’re focused on the now, not on one year, three years, five years, or thirty years from now. Some believe inverted yield curves signal disruption in financial markets. On the surface, Cloud First seems to signal the disruption that is cloud computing. To take this metaphor a little further, this inversion from “Cloud Never” to Cloud First suggests to me an inverted set of concerns. Does Cloud First prioritize an immediate need to say “something” about the cloud...
The term "software defined" has taken many forms in recent months from Software Defined Datacenter (SDDC), Software Defined Infrastructure (SDI) to even component vendors adopting the tagline to exalt their own agenda with Software Defined Networking (SDN) and Software Defined Storage (SDS). Yet ironically the majority of the vendors adopting the tagline are also dealing with infrastructure product lines that a "software defined" approach is aiming to make irrelevant.
The emergence of the cloud made clear to the industry that the procurement, design and deployment of the infrastructure components of network, storage and compute were a hindrance to application delivery. The inability of infrastructure components to be quickly and successfully coordinated, or to respond automatically to application needs, has led many to question why traditional approaches to infrastructure are still being considered. In an attempt to safeguard themselves from this realisation, i...
The Citrix Ready program helps customers identify third-party solutions that are recommended to enhance virtualization, networking and cloud computing solutions from Citrix. Appcore AMP completed a rigorous verification process to ensure compatibility with Citrix CloudPlatform™, giving customers confidence in the compatibility of the joint solution.
“Given the success and user growth of Appcore AMP since early 2013, the time was right for us to further enhance our ongoing relationship with Citrix. We are extremely proud that Appcore AMP is now Citrix Ready verified. Citrix CloudPlatform provides Appcore AMP with the most relevant cloud orchestration tool designed for service providers,” stated Appcore CEO Jeff Tegethoff.
In the past 10 years, numerous solutions have been developed to deal with limitations in the leading relational databases. But knowing which offering works best per use case can be difficult to decipher.
Avi’s usual recommendation for Hadoop-type jobs is the MapR distribution. He has found that MapR is similarly priced but offers higher performance and a native NFS interface to Hadoop that can perform at hundreds of gigabits at scale, utilize 24 to 90 disks (depending on CPU and RAM), and allow basic Unix tools to be used for analyzing subsets of data in a more ad hoc fashion. At the end of his session, Avi will make recommendations based on common use cases, data back ends, and application requirements.
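That "basic Unix tools" point is the key convenience: once the cluster is NFS-mounted, ordinary file access works with no Hadoop job required. A sketch of the idea, where the /mapr mount point, cluster name, and log layout are assumptions:

```python
import glob

# Count ERROR lines across one month of logs, read straight over the NFS mount.
error_count = 0
for path in glob.glob("/mapr/cluster1/logs/2014/03/*.log"):
    with open(path) as f:     # behaves like any local file
        for line in f:
            if "ERROR" in line:
                error_count += 1
print("errors in March logs:", error_count)
```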
DigitalOcean, a cloud computing startup, has closed a $37.2MM Series A led by Andreessen Horowitz with participation from existing investors IA Ventures and CrunchFund.
DigitalOcean believes in making complex infrastructure simple and providing customers with a seamless experience. New users can deploy a cloud server in 55 seconds with an intuitive control panel interface, which can be replicated on a larger scale with the company's straightforward API.
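For a feel of what a straightforward API means in practice, here is a hedged sketch of creating a cloud server over DigitalOcean's JSON-over-HTTP API (a v2-style endpoint is shown; consult the current API docs for exact fields, and note the token, region, size, and image values are placeholders):

```python
import requests

resp = requests.post(
    "https://api.digitalocean.com/v2/droplets",
    headers={"Authorization": "Bearer YOUR_API_TOKEN"},   # placeholder token
    json={"name": "web-1", "region": "nyc2",
          "size": "512mb", "image": "ubuntu-14-04-x64"},
)
resp.raise_for_status()
print("created droplet id:", resp.json()["droplet"]["id"])
```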
The evolutionary nature of mobile presents a security-centric challenge for businesses with corporate content on mobile devices. Enterprises put themselves at risk when users on the go access sensitive information through email and applications on smartphones and tablets. Organizations can choose to ignore this security threat or enhance employee productivity through secure corporate containers.
In his session at 14th Cloud Expo, Eric Owings, an enterprise account executive at AirWatch®, will discuss best practices and strategies to ensure global security and workforce enablement by leveraging enterprise mobility management (EMM) across the enterprise. He will also provide attendees with a deeper understanding of enterprise mobility in a connected ecosystem, while ensuring security and compliance in the cloud.
AppZero, the fastest, most flexible way to move server applications, announced today it has signed a channel partnership agreement with Dot Net Solutions, one of the UK's top Microsoft Cloud partners. The agreement is part of a worldwide channel strategy implemented by AppZero to ensure comprehensive global coverage of its award-winning application migration software in the run-up to Windows Server 2003 (WS2003) end of support. Microsoft recommends AppZero Enterprise for application migration for various applications from Windows Server 2003 to either Windows Server 2012 or Windows Azure.
Register and Save!
Save $700 on your “Golden Pass”! Call 201.802.3020
New York City Call For Papers Now OPEN
SUBMIT your speaking proposal for the upcoming SDDC Expo in New York, NY! [June 10-12, 2014]
Please Call 201.802.3021
events (at) sys-con.com
SYS-CON's SDDC Expo, held each year in California, New York, Prague, Tokyo, and Hong Kong, is the world’s leading Cloud event, now in its 6th year and larger than all other Cloud events put together. For sponsorship, exhibit opportunities, and the show prospectus, please contact Carmen Gonzalez, carmen (at) sys-con.com.
Senior Technologists, including CIOs, CTOs, VPs of technology, IT directors and managers, network and storage managers, network engineers, enterprise architects, communications and networking specialists, and directors of infrastructure; Business Executives, including CEOs, CMOs, CIOs, presidents, VPs, directors, business development, and product and purchasing managers.
Join Us as a Media Partner - Together We Can Rock the IT World!
SYS-CON Media has a flourishing Media Partner program in which mutually beneficial promotion and benefits are arranged between our own leading Enterprise IT portals and events and those of our partners.
If you would like to participate, please provide us with details of your website(s) and event(s) or your organization, and please include basic audience demographics as well as relevant metrics such as average page views per month.
The term "Big Data" is quite possibly one of the most difficult IT-related terms to pin down ever. There are so many potential types of, and applications for Big Data that it can be a bit daunting to consider all of the possibilities. Thankfully, for IT operations staff, Big Data is mostly a bunch of new technologies that are being used together to solve some sort of business problem. In this blog post I'm going to focus on what IT Operations teams need to know about big data technology and support.
Google’s pursuit of self-driving cars has been well documented over the years. The promise of fleets of self-driving vehicles that could potentially make driving safer while simultaneously shortening commute times makes it one of the most attractive futures technologies around. But where would self-driving cars be adopted first?
While there will certainly be some people with deep pockets in Silicon Valley who will want to be early adopters, commuter driving is not likely the place where this catches on first. A few self-driving cars on the freeway will not change commute times in a meaningful way, because they would be a minority amidst a sea of normal cars operated by the same people who have made commuting a nightmare up until this point. With commute times unchanged, individuals would still face the same commute; the only difference is that they could text or read or do whatever else while they sit in stop-and-go traffic.
There are people who take life slowly, accept it on its terms, meditate, read about meditation, and internalize books like "Wherever You Go, There You Are," a bestseller by a guy with a hyphenated name. I'm more of a Malcolm Gladwell fan -- "Outliers," or his newest book, "David and Goliath: Underdogs, Misfits, and the Art of Battling Giants."
Maybe I'm just naturally the type that will spend 10,000 hours practicing, sharpening, mastering, thinking about the advantages a startup has over leaders in the market, getting back in the game. I've been in enterprise software for 30 years and the cloud since its formation. A long time ago, I faced the fact that I don't have the basketball gene (although I do coach several boys' sports teams) and I'm pretty sure meditating will not increase my productivity as a start-up CEO. I could be wrong on the meditating part.
In 1974 a specification was developed that would, eventually, launch what we know of as "The Internet." That specification was TCP, and though it is often overshadowed by HTTP as the spark that lit the fire under the Internet, without TCP HTTP wouldn't have a (transport) leg to stand on.
Still, it would take 10 years before the Internet (all 1000 hosts of it) converted en masse to using TCP/IP for its messaging. Once that adoption had occurred, it was a much shorter leap for the development of HTTP to occur and then, well, your refrigerator or toaster can probably tell you the rest at this point.
That makes TCP 40 years old this year. And despite the old adage that you can't teach an old dog new tricks, TCP has continued to evolve right along with the needs of the applications that rely so heavily on its reliable transport mechanism.
Disaster recovery is about being able to get your business back up and running as quickly as you can after the disaster happens. Throughout this series, my teammates have focused on the infrastructure side of the house: servers, virtual machines, etc. You can see the full series here: Disaster Recovery Planning for I.T. Pros
However, I have a question: what about the desktops? As a reminder, my good friend Jennelle posted a series of questions in part 1 of this series: Disaster Recovery for IT Pros- How to Plan, What are the Considerations-
Here are my three main questions to get started (a sketch of how the answers map onto recovery targets follows the list):
1. What is the most important application or service in each business unit, or for the business overall?
2. How much downtime is acceptable?
3. How much data loss is acceptabl...
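Here is the promised sketch (my own illustration, not a formal DR framework) of how answers to those questions map onto per-application recovery targets: RTO bounds acceptable downtime, and RPO bounds acceptable data loss.

```python
from dataclasses import dataclass

@dataclass
class RecoveryTarget:
    app: str
    business_unit: str
    rto_hours: float   # maximum tolerable downtime
    rpo_hours: float   # maximum tolerable data loss (backup interval)

# Hypothetical answers gathered from each business unit.
targets = [
    RecoveryTarget("email", "all", rto_hours=4, rpo_hours=1),
    RecoveryTarget("order-entry", "sales", rto_hours=1, rpo_hours=0.25),
    RecoveryTarget("desktops", "all", rto_hours=24, rpo_hours=24),
]

# Recover the most time-critical services first.
for t in sorted(targets, key=lambda t: t.rto_hours):
    print(f"{t.app}: back within {t.rto_hours}h, lose at most {t.rpo_hours}h")
```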
The goal of a load test is to replicate, as realistically as possible, the traffic and conditions your app experiences in production.
As a tester, you understand how important it is to create the most realistic load test possible to provide confidence that your web application won’t fail in the field. But how do you know where your load should come from to produce realistic results?
Load testing from dedicated infrastructure inside your own datacenter is the most common and typically the most accessible way to wring out performance issues in your applications. This type of testing should be performed as part of your regular testing process.
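As a bare-bones illustration of what such in-datacenter load generation looks like, the following Python sketch runs worker threads against a single hypothetical URL and reports a latency summary; real tools add think time, ramp-up, and mixed transaction scripts.

```python
import threading, time, statistics
import urllib.request

URL = "http://app.internal.example/health"   # hypothetical target
latencies, lock = [], threading.Lock()

def worker(requests_per_worker=50):
    for _ in range(requests_per_worker):
        start = time.time()
        try:
            urllib.request.urlopen(URL, timeout=10).read()
        except OSError:
            continue                          # only time completed requests
        with lock:
            latencies.append(time.time() - start)

threads = [threading.Thread(target=worker) for _ in range(20)]
for t in threads:
    t.start()
for t in threads:
    t.join()

if latencies:
    print(f"{len(latencies)} ok, median "
          f"{statistics.median(latencies) * 1000:.0f} ms")
```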
Back in the noughties I made the conscious decision, as a Storage guy, to immerse myself in what was being termed server virtualization and to understand the product offerings of a relatively new company named VMware. To this day I remember the incredulous looks and responses I received from my Storage counterparts, who were convinced that the VMware fad was nothing more than a system admin tool. Indeed, the organizations that were early adopters of VMware ended up assigning the virtualization responsibility to the system admin team; at no point was there ever a thought that a dedicated virtualization team could or should be established. Fast forward to 2014, and virtualization teams are the norm, while Storage administrators are constantly hard-pressed to better understand VMware as they provision and manage virtualized environments. Such a culture change was unthinkable 10 years ago, yet here we are again with the emergence of another silo, the Converged Infrastructure...
#sdas #webperf New TCP algorithms and protocols in a platform net improved performance for apps
The original RFC for TCP (793) was written in September of 1981. Let's pause for a moment and reflect on that date.
When TCP was introduced, applications on "smart" devices were relegated to science fiction, and use of the Internet by the average consumer was still more than two decades away.
Yet TCP remains, like IP, uncontested as "the" transport protocol of ... everything.
New application architectures, networks, and devices all introduce challenges with respect to how TCP behaves. Over the years it's become necessary to tweak the protocol with new algorithms designed to address everything from congestion control to window size control to key management for the TCP MD5 signature option to header compression. There are, in fact, over 100 (that's where I stopped counting) TCP-related RFCs on the digital book...
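Applications can opt into some of this evolution directly. For example, on Linux a socket can select its congestion-control algorithm via the TCP_CONGESTION socket option; the sketch below is Linux-specific (option value 13) and only works for algorithms the kernel has available.

```python
import socket

# Python exposes socket.TCP_CONGESTION on newer versions; fall back to the
# Linux numeric value otherwise. This option does not exist on all platforms.
TCP_CONGESTION = getattr(socket, "TCP_CONGESTION", 13)

s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.setsockopt(socket.IPPROTO_TCP, TCP_CONGESTION, b"cubic")   # pick algorithm
print(s.getsockopt(socket.IPPROTO_TCP, TCP_CONGESTION, 16))  # b'cubic\x00...'
s.close()
```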
If the future of federated controllers is based on service layering, then how do multiple controllers manage the same device? Is there a requirement for state synchronization? Do they share information about device operation or configuration? Is there a need for controllers that are managing different aspects of the same device to be coordinated in what they do?
As with anything worth asking, the answer is: it depends. It is certainly the case that in a tiered controller architecture, where one controller is managing things like basic configuration and another controller is working higher up the stack (managing a service, for instance), there is no need to keep high-fidelity replications of state across both controllers. Specific configuration information that might be important to the lower-level controller can likely be withheld from the services controller. In cases where the services controller needs to know things like configuration state (it might be required to know VLAN ...
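A tiny sketch of that state split (my framing, not any particular controller product): the lower tier holds full device state, and only a service-relevant subset is replicated upward.

```python
# Full state held by the low-level (device/configuration) controller.
device_state = {
    "interfaces": {"eth0": {"mtu": 9000, "speed": "10G"}},
    "vlans": {"eth0": [10, 20]},
    "bgp": {"asn": 65001, "neighbors": ["10.0.0.2"]},
    "firmware": "4.2.1",
}

SERVICE_VISIBLE_KEYS = {"vlans"}   # what the upper tier actually needs

def sync_up(state, visible=SERVICE_VISIBLE_KEYS):
    """Return the low-fidelity view shared with the services controller."""
    return {k: v for k, v in state.items() if k in visible}

print(sync_up(device_state))   # {'vlans': {'eth0': [10, 20]}}
```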
Incredibly, it's that time of year again when voting commences for the Top Virtualization blog of the year, and I've just been pinged a note that The SANMAN blog has been nominated again for voting! Last year's nomination was also a nice surprise, as the blog ended up being a New Entry in the charts at 172 - sure, it's not a One Direction hit single that went straight to number one, but 172 is still a chart number Kajagoogoo would have been proud of (-;
In part one of this three-part series I summed up how the way we produce and consume data has evolved over the last three decades, creating a need for new storage methodologies that can help enterprises store and effectively manage massive pools of data. I concluded that the immutable nature of unstructured data storage holds the key to solving the scalability and availability problems of traditional file storage.
Unstructured data has traditionally been stored in file-based systems, which enable users to access files simultaneously and modify them. This is great functionality for office environments, where multiple users might indeed be updating each other's spreadsheets, but it is complete overkill when storing data that will probably never be changed again. DDN developed our Web Object Scaler (WOS) solution with this "unchanging" aspect of data in mind.
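A toy sketch of the write-once idea (not WOS's actual interface): objects are stored under content-derived IDs, so there is no update path to coordinate, only put and get.

```python
import hashlib

class ImmutableStore:
    """Write-once object store: content-addressed put/get, no updates."""
    def __init__(self):
        self._objects = {}

    def put(self, data: bytes) -> str:
        oid = hashlib.sha256(data).hexdigest()   # content-addressed ID
        self._objects.setdefault(oid, data)      # re-puts are no-ops
        return oid

    def get(self, oid: str) -> bytes:
        return self._objects[oid]                # no overwrite, no locking

store = ImmutableStore()
oid = store.put(b"sensor reading 42")
assert store.get(oid) == b"sensor reading 42"
```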
As SDN moves closer to large-scale deployments, the issue of controller scaling is becoming a hotter topic. The consensus seems to favor some form of distributed cluster environment, likely in the form of federated clusters. But how should these federations be formed?
The first thing to think about is the blast radius for controllers. Even if a controller could scale to manage every node in the network, it is unlikely that you would want that to be the design. It simply creates too large a maintenance and failure domain. Even with a redundant controller, the issues with expansive failure domains are prohibitively scary.
So most people understand that typical large-scale deployments are likely to utilize a multi-controller architecture. But how do you decide how many controllers you need?
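A back-of-the-envelope way to think about it, with made-up inputs: cap each controller's domain at your tolerable blast radius, then multiply by the redundancy you want per domain.

```python
import math

nodes = 1200        # switches/devices under SDN control (hypothetical)
max_domain = 150    # largest failure domain you will tolerate
redundancy = 2      # controllers per domain for failover

domains = math.ceil(nodes / max_domain)
controllers = domains * redundancy
print(f"{domains} domains -> {controllers} controllers")   # 8 -> 16
```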
Whether you are new school or old school, will you be stuck in those schools of thought, or will you advance to the current and future schools?
From the old-school folks you will hear things along the lines of "that is how we do it" or "that is how we did it." You might also hear "let's use what we have for as long as we can make it work," fixing problems while learning from mistakes. From the old school you may also hear that the new school is focused only on the newest, latest, greatest, shiny technology, not to mention themes such as "we have to stick around to clean up and take care of the mess left when the new-schoolers move on to their next focus."
As promised in our earlier post, here are the final predictions we saw making the rounds in the blogosphere at the start of the year.
Many are predicting that Microsoft will get more serious about the cloud. Amazon dominated the cloud news in 2013, but 2014 will be a good year for Microsoft and Google, said Dan Sullivan on Search Cloud Computing. "Microsoft is paving the way for hybrid clouds with Windows Server 2012 R2 and Windows Azure Pack. By the end of 2014, we should have a better understanding of good practices for managing workloads across hybrid Azure clouds."
Bernard Golden on CIO echoes the sentiment. "In a way, AWS has had a free ride to this point. Most of its competition has come from the hosting world, and, as noted, is unable to take a software approach to the domain. The inevitable result: AWS has improved, and grown, much more rapidly than other CSPs. That unopposed free run will end in 2014. Both Google and Microsoft have AWS in their crosshairs and are rolling ou...
A lot of security-minded folks immediately pack up their bags and go home when you start talking about automating anything in the security infrastructure. Automating changes to data center firewalls, for example, seems to elicit a reaction not unlike the one provoked by suggesting you put an unpatched Windows machine directly on the public Internet.
At RSA yesterday I happened to see a variety of booths with a focus on... logs. That isn't surprising, as log analysis is used across the data center and across domains for a variety of reasons. It's one of the ways databases are replicated, it's part of compiling access audit reports, and it's absolutely one of the ways in which intrusion attempts can be detected.
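As a tiny example of that last use, here is a sketch that flags source IPs with bursts of failed SSH logins in a standard auth log; the path and threshold are assumptions, and real systems correlate across many sources and time windows.

```python
import re
from collections import Counter

# Matches sshd's "Failed password ... from <ip>" lines.
FAILED = re.compile(r"Failed password .* from (\d+\.\d+\.\d+\.\d+)")
failures = Counter()

with open("/var/log/auth.log") as log:   # assumed log location
    for line in log:
        m = FAILED.search(line)
        if m:
            failures[m.group(1)] += 1

for ip, count in failures.most_common():
    if count >= 10:                      # arbitrary alert threshold
        print(f"possible brute force from {ip}: {count} failures")
```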