The SDDC Is Here! Now Help Push It Forward!
Experience IT-as-a-Service at SDDC Expo West. Learn and Contribute in the heart of Silicon Valley Nov 4-6
The Software-Defined Datacenter--the SDDC--sits firmly within the universe of cloud computing. Enterprise IT has become virtualized and re-assembled over the past decade, with software now able to define everything from specific services to entire datacenters.
Among the most dynamic aspects of the cloud computing revolution is the idea of IT-as-a-Service--presented to enterprise IT as an SDDC. Enterprise IT must grapple with legacy technology from the distant past, the recent past, and acquisitions, and eliminate the numerous--and massive--data and application silos that go with it. The SDDC is a breakthrough strategy that enables an integration of legacy with the latest in cloud computing.
The SDDC debate is far from over, so join us at SDDC Expo West Nov 4-6 at the Santa Clara Convention Center in the heart of Silicon Valley to hear the latest developments, strategies, and use cases involving the SDDC.
SDDC Expo West is co-located with Cloud Expo West, and will enable you to mingle with your colleagues, contribute to the discussion, and help drive this truly 21st-century feature of enterprise IT forward.
We'll see you in Santa Clara Nov 4-6!
The Top Keynotes, the Best Sessions, a Rock Star Faculty, and the Most Qualified Delegates at ANY SDDC Event!
The software-defined data center provides an agile, reliable and secure foundation for cloud, while also delivering the intelligence and control needed to create sustainable business value.
SDDC is a premier conference that connects a wide range of stakeholders to provide a valuable and educational experience for all.
SYS-CON's Cloud Expo drew more than 7,000 attendees at Jacob Javits Center
Benefits of Attending the THREE-Day Technical Program
LEARN exactly why SDDC is relevant today from an economic, business and technology standpoint.
HEAR first-hand from industry experts how to govern access to compute, storage, and network resources based on corporate IT policies.
SEE how to control the data center.
DISCOVER what the core components of the Software-Defined Data Center are.
FIND OUT how to transform a traditional data center that is inflexible and costly into a cloud computing environment that is secure, virtualized and automated.
MASTER the three building blocks of the SDDC – network virtualization, storage virtualization and server virtualization.
It’s certainly no secret that cloud solutions have become an important and increasingly necessary part of how companies do business today. For enterprises, implementing cloud-based services can help boost productivity, enhance efficiency and reduce costs.
Private cloud solutions take the cloud concept one step further: A private cloud configuration looks and acts like the public cloud, giving enterprises the same levels of speed, agility and cost savings – but in a far more secure environment in which dedicated bandwidth and security are guaranteed.
After a couple of false starts, cloud-based desktop solutions are picking up steam, driven by trends such as BYOD and pervasive high-speed connectivity.
In his session at 15th Cloud Expo, Seth Bostock, CEO of IndependenceIT, cuts through the hype and the acronyms, and discusses the emergence of full-featured cloud workspaces that do for the desktop what cloud infrastructure did for the server. He’ll discuss VDI vs DaaS, implementation strategies and evaluation criteria.
Sharing data is a cornerstone of the scientific method because it makes it possible to replicate work. That foundation is mostly absent from data science, which makes obtaining and reusing knowledge more difficult than it should be.
Job postings for data scientists increased 15,000 percent between 2011 and 2012, and Gartner predicted that 63% of organizations would invest in Big Data this year. The communications, consumer, education, financial, healthcare, government, manufacturing, and retail sectors are all adopting business practices that are using data science to inform their activities and improve operations.
“Vote early and vote often.” Back in the 1920s and ’30s, when neither election technology nor oversight were as effective as they are today, and the likes of Al Capone were at work gaming the system, this phrase wasn’t a joke. It was a best practice.
If you want guaranteed results, what better way than to get people to the polls early, and then repeatedly, to vote for your candidate?
None of this sitting around until the end of the day, hoping that the election goes the way you want. Capone would tell you, “That’s for saps.”
What does this have to do with cloud computing? All too often we see IT teams taking a "buy it and hope it works" strategy when it comes to adopting cloud-based apps. They migrate their entire user base to the cloud on faith, assuming that they can worry about performance and availability issues later, if ever. After all, everybody in the company accesses the Internet today without issues, so your cloud apps should work just fine, right?
The ability to embed prediction into multiple business processes amplifies the value that predictive analytics delivers. Yet many still see predictive analytics as a separate activity that is the responsibility of a small team of expert analysts.
This webinar will show how predictive analytics can be used throughout the organization by anyone looking for answers and how organizations can make predictive analytics the basis for better operational decisions with IBM SPSS Modeler.
Avere Systems has announced that it raised an additional $20 million in venture financing, bringing the total amount invested in the company to $72 million. The Series D funding round was led by Western Digital Capital, with participation from previous investors Lightspeed Venture Partners, Menlo Ventures, Norwest Venture Partners and Tenaya Capital. The funding will be used to accelerate sales, marketing and continued development of the company's hybrid cloud storage solutions.
Many mid-market companies have invested significant time and resources to secure and back up their servers, client computers, data, and overall network infrastructure in the traditional client-server setup. Now cloud computing and virtualization, considered emerging technologies just a few years ago, have arrived on the scene, bringing both significant benefits and new challenges. Find out more about this transition to get the most out of your virtual environment.
The threats facing network operators all over the world, spanning service providers, enterprises, cloud and hosting providers and mobile operators alike, are by no means stalling. While optimism is always the name of the game, we know all too well in security that trying to keep pace with the slew of attack vectors out there today is an unfortunate reality. As our 9th annual Worldwide Infrastructure Security Report reveals, the magnitude of attacks is on the upswing once again, and coupled with increasingly complex, multi-vector attacks, the threat is all too real.
Octoblu on Tuesday emerged from stealth mode to announce its vision to provide an Internet of Things (IoT) platform for real-time connections and communication management across applications, people and physical devices.
The convergence of global trends, including cloud computing, the proliferation of mobile devices, and social and business transactions over the Internet, is creating the IoT – where devices and sensors are connecting and exchanging information. This presents a tremendous opportunity for companies to deliver new products and services. According to Gartner estimates, the IoT will include 26 billion installed units by 2020, and by that time, IoT product and service suppliers will generate incremental revenue exceeding $300 billion, mostly in services.
SYS-CON Events announced today that the NetofEverything Blog has been named “Media Sponsor” of SYS-CON's @ThingsExpo, which will take place on June 10–12, 2014, at the Javits Center in New York City, New York.
NetofEverything brings you the latest on the Internet of Things. For more information, visit http://netofeverything.blogspot.com.
Internet of @ThingsExpo 2014 Silicon Valley, November 4–6, at the Santa Clara Convention Center in Santa Clara, CA, will feature technical sessions from a rock star conference faculty and the leading IoT industry players in the world.
When Swedish communications services provider TDC needed infrastructure improvements across their disparate networks in several Nordic countries, they required both simplicity in execution and agility in performance.
Our next innovation case study interview therefore highlights how TDC in Stockholm found ways to better determine root causes to any network disruption, and conduct deep inspection of the traffic to best manage their service-level agreements (SLAs).
There are 182 billion emails sent every day, generating a lot of data about how recipients and ISPs respond. Many marketers take a more-is-better approach to stats, preferring the ability to slice and dice their email lists based on numerous arbitrary stats. However, what fundamentally matters is whether or not sending an email to a particular recipient will generate value. Data scientists can design high-level insights such as engagement prediction models and content clusters that allow marketers to cut through the noise and design their campaigns around strong, predictive signals rather than arbitrary statistics.
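A toy illustration of such an engagement-prediction signal, in Python. The features, weights, and threshold are invented for illustration; they are not SendGrid's actual model, just a sketch of how a per-recipient score can replace raw stats:

```python
import math

def engagement_score(opens, clicks, sends, days_since_last_open):
    """Toy logistic engagement score. All weights are illustrative
    assumptions, not a trained model."""
    if sends == 0:
        return 0.5  # no history: neutral prior
    open_rate = opens / sends
    click_rate = clicks / sends
    # Recency decays the score; coefficients are invented for this sketch.
    z = 3.0 * open_rate + 5.0 * click_rate - 0.05 * days_since_last_open - 1.0
    return 1.0 / (1.0 + math.exp(-z))

def should_send(recipient_stats, threshold=0.3):
    """Suppress sends to recipients unlikely to engage."""
    return engagement_score(**recipient_stats) >= threshold

engaged = {"opens": 8, "clicks": 2, "sends": 10, "days_since_last_open": 3}
dormant = {"opens": 0, "clicks": 0, "sends": 20, "days_since_last_open": 120}
print(should_send(engaged), should_send(dormant))
```

A real model would learn those coefficients from historical click/open events rather than hand-tuning them, but the decision it drives is the same: send where the predicted value is high.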
SendGrid sends up to half a billion emails a day for customers such as Pinterest and GitHub. All this email adds up to more text than produced in the entire twitterverse. We track events like clicks, opens and deliveries to help improve deliverability for our customers – adding up to over 50 billion useful events every month. While SendGrid data covers only abo...
SYS-CON Events announced today that ActiveState, a global leader providing software application development and management solutions, has been named “Silver Sponsor” of DevOps Summit Silicon Valley, which will take place on November 4–6, 2014, at the Santa Clara Convention Center in Santa Clara, CA.
Founded in 1997, ActiveState is a global leader providing software application development and management solutions. The Company’s products include: Stackato, a commercially supported Platform-as-a-Service (PaaS) that harnesses open source technologies such as Cloud Foundry and Docker; dynamic language distributions ActivePerl, ActivePython and ActiveTcl; and developer tools such as the popular Komodo Edit and Komodo IDE. Headquartered in Vancouver, Canada, ActiveState is trusted by customers and partners worldwide, across many industries including telecommunications, aerospace, software, financial services and CPG.
SYS-CON Events announced today that O'Reilly Media has been named “Media Sponsor” of SYS-CON's 15th International Cloud Expo®, which will take place on November 4–6, 2014, at the Santa Clara Convention Center in Santa Clara, CA.
O'Reilly Media spreads the knowledge of innovators through its books, online services, magazines, and conferences. Since 1978, O'Reilly Media has been a chronicler and catalyst of cutting-edge development, homing in on the technology trends that really matter and spurring their adoption by amplifying "faint signals" from the alpha geeks who are creating the future. An active participant in the technology community, the company has a long history of advocacy, meme-making, and evangelism.
Get the essentials about agile testing and service virtualization from several different perspectives – get an analyst’s take from Diego Lo Giudice of Forrester Research on how to remove agile testing bottlenecks and ways to calculate your potential return on investment for Service Virtualization. Hear about real customer implementations in the financial services, travel, and healthcare sectors. Learn from experts on how service virtualization is essential for mobile development, testing packaged applications (such as SAP), and more.
Register and Save!
Save $500 on your “Golden Pass”! Call 201.802.3020
Silicon Valley Call For Papers Now Open
Submit your speaking proposal for
the upcoming SDDC Expo in
Silicon Valley! November 4-6, 2014
Please Call 201.802.3021
events (at) sys-con.com
SYS-CON's SDDC Expo, held each year in California, New York, Prague, Tokyo, and Hong Kong, is the world’s leading Cloud event in its 6th year, larger than all other Cloud events put together. For sponsorship, exhibit opportunities and show prospectus, please contact Carmen Gonzalez, carmen (at) sys-con.com.
Senior Technologists, including CIOs, CTOs, VPs of technology, IT directors and managers, network and storage managers, network engineers, enterprise architects, communications and networking specialists, and directors of infrastructure; Business Executives, including CEOs, CMOs, CIOs, presidents, VPs, directors of business development, and product and purchasing managers.
Join Us as a Media Partner - Together We Can Rock the IT World!
SYS-CON Media has a flourishing Media Partner program in which mutually beneficial promotion and benefits are arranged between our own leading Enterprise IT portals and events and those of our partners.
If you would like to participate, please provide us with details of your website(s) and event(s) or your organization, and include basic audience demographics as well as relevant metrics such as average page views per month.
When people talk about Big Data, the emphasis is usually on the Big. Certainly, Big Data applications are distributed largely because the size of the data on which computations are executed warrants more than a typical application can handle. But scaling the network that provides connectivity between Big Data nodes is not just about creating massive interconnects.
In fact, the size of the network might be the least interesting aspect of scaling Big Data fabrics.
Much has been published about the Open Compute Project. Initiated by Facebook, it has become an industry effort focused on standardization of many parts and components in the datacenter. Initially focused on racks, power and server design, it has also added storage and now networking to its fold. Its goal is fairly straightforward: “how can we design the most efficient compute infrastructure possible”, a direct quote from its web site.
The focus of OCP has been mostly around hardware designs and specifications. If you look at the networking arm of OCP, you find several Top of Rack (ToR) ethernet switch hardware designs donated by the likes of Broadcom, Mellanox and Intel. By creating open specifications of hardware designs for fairly standard ethernet switches, the industry can standardize on these designs and economics of scale would drive down the cost to create and maintain this hardware. A noble goal and there are many opinions on both sides of this effort. Mostly referred to as...
Some people are never satisfied. These fearless agents of change are everywhere. They're informed, confident and willing to experiment. They seek out the best business technology solution for the job at hand. They act on instinct. Yes, you could say that they're driven.
However, they're also at risk of being labeled as "rogue employees" because they ordered a software-as-a-service (SaaS) offering and perhaps expensed it without prior approval. Sometimes they're the champion of progressive projects that are referred to as Shadow IT -- intentionally bypassing their company's formal evaluation and procurement process. How can this happen?
Is it just because their activities are tolerated, or are they being encouraged? If so, by whom? Why would any business leader applaud a team member who breaks the rules? Maybe the simple answer is that staying within the confines of the status quo won't enable a top performer to fully apply their talent and achieve their absolute best.
One of the primary principles of object-oriented programming (OOP) is encapsulation. Encapsulation protects the state of an object from being manipulated in ways inconsistent with how that state is intended to change. The variable (state) is made private; that is to say, only the object itself can change it directly. Think of it as the difference between an automatic transmission and a standard (stick). With the latter, I can change gears whenever I see fit. The problem is that when I see fit may not be appropriate to the way in which the gears should be shifted. Which is how engines end up being redlined.
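A minimal Python sketch of that idea, using the transmission analogy (the class, gear limits, and RPM thresholds are all invented for illustration):

```python
class Transmission:
    """Encapsulates gear state: it can only change via shift logic."""

    MAX_GEAR = 5

    def __init__(self):
        self._gear = 1   # "private" by convention; callers don't touch it directly
        self._rpm = 800

    @property
    def gear(self):
        return self._gear  # read-only view of the internal state

    def accelerate(self):
        self._rpm += 700

    def shift_up(self):
        # The object decides when a shift is appropriate, preventing the
        # "redline" scenario that direct manipulation of the gear would allow.
        if self._gear < self.MAX_GEAR and self._rpm > 2000:
            self._gear += 1
            self._rpm = 1500  # engine speed drops after an upshift

t = Transmission()
t.accelerate()
t.accelerate()   # rpm is now 2200
t.shift_up()     # allowed: rpm is high enough
print(t.gear)    # 2
t.shift_up()     # refused: rpm dropped to 1500 after the first shift
print(t.gear)    # still 2
```

The automatic transmission here is the encapsulated object; a stick shift would be the equivalent of setting `_gear` from outside, with no guard on engine speed.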
Last week Greg Ferro (@etherealmind) wrote this article about his experience with scripting as a method for network automation, with the ultimate conclusion that scripting does not scale.
Early in my career I managed a small network that grew to be an IP over X.25 hub of Europe, for a few years providing many countries with their first Internet connectivity. Scripts were everywhere: small ones to grab stats and create pretty graphs, others that continuously checked the status of links and sent emails when things went wrong.
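A link-checking script of that sort might be sketched in Python as follows. The hostnames are invented, and the `ping` invocation is an illustrative assumption, not a recommendation:

```python
import subprocess

# Hypothetical list of links to watch; hostnames are invented for illustration.
LINKS = ["gw-paris.example.net", "gw-oslo.example.net"]

def link_up(host):
    """Ping the host once; treat a zero exit status as 'up'."""
    result = subprocess.run(
        ["ping", "-c", "1", "-W", "2", host],
        stdout=subprocess.DEVNULL,
        stderr=subprocess.DEVNULL,
    )
    return result.returncode == 0

def check_links(links, probe=link_up):
    """Return the subset of links the probe reports as down."""
    return [host for host in links if not probe(host)]

# Usage (would actually ping the hosts above):
#   for host in check_links(LINKS):
#       print(f"ALERT: {host} is unreachable")  # a real script would email instead
```

Each script like this is trivial on its own; the scaling problem Greg describes starts when dozens of them, each with its own host lists and assumptions, have to be kept consistent.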
While it is hard to argue with Greg’s complaints per se, I believe the key point is missing. And it has nothing to do with scripting. In a reply, Ivan’s last comment touches on the real issue.
Cloud computing has finally come into its own. While we’ve been hearing for 8 years or more that cloud computing would one day take over the enterprise, the fact of the matter is that it’s been slow going.
While the spread of cloud computing solutions hasn’t been as rapid as many early proponents predicted, we have now reached a point where cloud solutions are seen as viable for most organizations and are being utilized regularly.
It's an application world; a world that is rapidly expanding. With new opportunities and markets arising driven by mobility and the Internet of Things, it is only going to keep expanding as applications are deployed to provision, license, and manage the growing sensors and devices in the hands of consumers.
Applications are not isolated containers of functionality. No application winds up in production without a robust stack of resources and services to support it. Storage and compute, of course, are required, but so are the networking - both stateless and stateful - services that provide for scale, security and performance.
The challenge in architecting, building, and managing data centers is one of balance. There are forces competing to both push together and pull apart datacenter resources. Finding an equilibrium point that is technologically sustainable, operationally viable, and business friendly is challenging. The result is frequently a set of compromises that outweigh the advantages.
The datacenter represents a diverse set of orchestrated resources bound together by the applications they serve. At its simplest, these resources are physically co-located. At its extreme, these resources are geographically distributed across many sites. Whatever the physical layout, these resources are under pressure to be treated as a single logical group.
Despite the hype and drama surrounding the HTTP 2.0 effort, the latest version of the ubiquitous HTTP protocol is not just a marketing term. It's a real, live IETF standard that is scheduled to "go live" in November (2014).
And it changes everything.
There are a lot of performance-enhancing changes in the HTTP 2.0 specification, including multiplexing and header compression. These are not minor updates to be overlooked; they significantly improve performance, particularly for clients connecting over a mobile network. Header compression, for example, minimizes the requirement to transport HTTP headers with each and every request - and response. HTTP headers can become quite the overhead, particularly for those requests comprised simply of a URL or a few bytes of data.
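A back-of-the-envelope Python sketch of that header overhead. The header values below are typical examples, not taken from the spec, and the 80-request page load is an assumed figure:

```python
# Typical request headers resent verbatim on every HTTP/1.1 request.
HEADERS = {
    "Host": "www.example.com",
    "User-Agent": "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36",
    "Accept": "text/html,application/xhtml+xml",
    "Accept-Encoding": "gzip, deflate",
    "Cookie": "session=abc123; prefs=dark",
}

def header_bytes(headers):
    """Bytes spent on headers for one request: 'Name: value\\r\\n' per field."""
    return sum(len(name) + 2 + len(value) + 2 for name, value in headers.items())

per_request = header_bytes(HEADERS)
# HTTP/1.1 repeats these bytes on every request of a page load, while HPACK
# (HTTP/2 header compression) sends the full field set roughly once and then
# references dynamic-table entries for fields that have not changed.
print(f"{per_request} header bytes per request, "
      f"~{per_request * 80 / 1024:.0f} KB repeated across an 80-request page load")
```

For a request whose body is a URL or a few bytes, that fixed header cost can dwarf the payload, which is why compressing it matters most on slow mobile links.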
One of the benefits of SDN is centralized control. That is, there is a single repository containing the known current state of the entire network. It is this centralization that enables intelligent application of new policies to govern and control the network - from new routes to user experience services like QoS. Because there is a single entity which has visibility into the state of the network as a whole, it can examine the topology at any given point and make determinations as to where this packet and that should be routed, how it is prioritized and even whether or not it is allowed to traverse the network.
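That centralized path computation can be sketched as a shortest-path run over the controller's topology map. The switch names and link costs below are invented; the point is that only an entity holding the whole graph can make this decision:

```python
import heapq

def shortest_path(topology, src, dst):
    """Dijkstra over a weighted adjacency map - the kind of computation a
    centralized SDN controller can run because it holds the full topology."""
    dist = {src: 0}
    prev = {}
    heap = [(0, src)]
    while heap:
        d, node = heapq.heappop(heap)
        if node == dst:
            break
        if d > dist.get(node, float("inf")):
            continue  # stale heap entry
        for neighbor, cost in topology.get(node, {}).items():
            nd = d + cost
            if nd < dist.get(neighbor, float("inf")):
                dist[neighbor] = nd
                prev[neighbor] = node
                heapq.heappush(heap, (nd, neighbor))
    # Walk back from dst to reconstruct the chosen path.
    path, node = [], dst
    while node != src:
        path.append(node)
        node = prev[node]
    path.append(src)
    return list(reversed(path))

# Hypothetical four-switch topology; names and link costs are illustrative.
TOPOLOGY = {
    "s1": {"s2": 1, "s3": 4},
    "s2": {"s1": 1, "s3": 1, "s4": 5},
    "s3": {"s1": 4, "s2": 1, "s4": 1},
    "s4": {"s2": 5, "s3": 1},
}

print(shortest_path(TOPOLOGY, "s1", "s4"))  # ['s1', 's2', 's3', 's4']
```

A distributed protocol has each switch converge on this answer independently; the centralized controller computes it once and can also weigh policy, priority, or admission decisions in the same pass.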
If LinkedIn profiles are any indication, User Experience (frequently shortened to UX) is the new orange. Indeed, across all manners of technology, there is an increasing focus on improving user experience. Driven in part by Apple’s success on the consumer side, it would appear that IT infrastructure vendors are getting in on the action. In the quest to simplify our collective lives and differentiate in a space more defined by cost than capability, the user is taking a more prominent role.
As it should be.
In the video at this link and embedded below I provide some context on how new approaches to data can enhance outcomes for public sector organizations, with a focus on real-world use cases. I also mention key requirements that apply at most government organizations for their data, and how organizations are addressing their unique requirements with technology provided by Cloudera:
Inarguably one of the drivers of software-defined architectures (cloud, SDDC, and SDN) as well as movements like DevOps is the complexity inherent in today's data center networks. For years now we've added applications and services, and responded to new threats and requirements from the business with new boxes and new capabilities. All of them cobbled together using traditional networking principles that adhere to providing reliability and scale through redundancy.
Some people believe good or bad things always happen in threes. I believe you will always be able to find three (and probably more) things that are good or bad and somewhat related, but sometimes I get surprised by the apparent coincidental appearance of several closely related “things”. Last week the folks at networkheresy.com posted a second installment of their “policy in the datacenter” discussion, Cisco announced the acquisition of tail-f and internal to Plexxi we had several intense architectural discussions around Configuration, Provisioning and Policy management. Maybe we can declare June CP&P month for networking.
It is mostly accepted that configuration deals with the deployment of devices and applications within an infrastructure. For network devices, it covers the portions of creating a fabric, protocols to maintain this fabric, access and control to the device itself, management connectivity etc. Once a network device is configured, it is a functioning element in a networ...
Compute started its major architectural transition several years ago with the introduction of virtualization. If you pay attention to any of the IT noise today, it should be clear that storage and networking are going through their own architectural evolutions as well. But another shift is also underway: applications are fundamentally changing as well.
An interesting dynamic in all of this is that it is near impossible for each of the four major IT areas to undergo simultaneous, coordinated evolution. Change is hard enough on its own, but changing multiple variables at once makes it difficult to anchor to anything substantial. And when change does occur along multiple fronts at the same time, the task of determining causation for newfound results is challenging at best.