The SDDC Is Here! Now Help Push It Forward!
Experience IT-as-a-Service at SDDC Expo West. Learn and Contribute in the heart of Silicon Valley Nov 4-6
The Software-Defined Datacenter--the SDDC--sits firmly within the universe of cloud computing. Enterprise IT has become virtualized and re-assembled over the past decade, with software now able to define everything from specific services to entire datacenters.
Among the most dynamic aspects of the cloud computing revolution is the idea of IT-as-a-Service--presented to enterprise IT as an SDDC. Enterprise IT must grapple with legacy technology from the distant past, the recent past, and acquisitions, and eliminate the numerous--and massive--data and application silos that go with it. The SDDC is a breakthrough strategy that enables an integration of legacy with the latest in cloud computing.
The SDDC debate is far from over, so join us at SDDC Expo West Nov 4-6 at the Santa Clara Convention Center in the heart of Silicon Valley to hear the latest developments, strategies, and use cases involving the SDDC.
SDDC Expo West is co-located with Cloud Expo West, and will enable you to mingle with your colleagues, contribute to the discussion, and help drive this truly 21st-century feature of enterprise IT forward.
We'll see you in Santa Clara Nov 4-6!
The Top Keynotes, the Best Sessions, a Rock Star Faculty, and the Most Qualified Delegates at ANY SDDC Event!
The software-defined data center provides an agile, reliable and secure foundation for cloud, while also delivering the intelligence and control needed to create sustainable business value.
SDDC Expo is a premier conference that connects a wide range of stakeholders to provide a valuable and educational experience for all.
SYS-CON's Cloud Expo drew more than 7,000 attendees at Jacob Javits Center
Benefits of Attending the THREE-Day Technical Program
LEARN exactly why SDDC is relevant today from an economic, business and technology standpoint.
HEAR first-hand from industry experts how to govern access to compute, storage, and network resources based on corporate IT policies.
SEE how to control the data center.
DISCOVER what the core components of the Software-Defined Data Center are.
FIND OUT how to transform a traditional data center that is inflexible and costly into a cloud computing environment that is secure, virtualized and automated.
MASTER the three building blocks of the SDDC – network virtualization, storage virtualization and server virtualization.
Everything from jet engines to refrigerators is joining the Internet of Things, pushing networks to the brink. In a new Boeing 747, almost every part of the plane is connected to the Internet, recording and, in some cases, sending continuous streams of data about its status. General Electric Co. has said that in a single flight, one of its jet engines generates half a terabyte of data.
Sovereign and Virtustream partnered to deliver Sovereign’s SAP® BusinessObjects-driven Analytics and Business Intelligence solutions on Virtustream's xStream™ cloud management platform (CMP). This alliance will drive operational efficiencies and competitive advantage for Sovereign’s enterprise clients, who will now receive the benefits of a cloud-based solution, including increased efficiency and enhanced security and performance.
Through this partnership, Sovereign can deploy the infrastructure required to support Analytics and Big Data projects to their customers, reducing the time IT must spend analyzing and implementing a myriad of complex hardware and infrastructure solutions typical in a traditional deployment model.
“Business leaders need access to analyze data as simply and efficiently as they access email, and that requires a shift in traditional IT-driven Business Intelligence projects,” said Mike Wasserman, General Manager of Sovereign System’s Government and Analytics Division.
Macrotron Systems is an ISO 9001:2008 registered Electronic Manufacturing Services company specializing in PCB assembly/test, ultrasonic welding, laser marking/engraving and pad printing. Operations are in Fremont, California. Macrotron offers a competitive advantage in the electronics manufacturing service marketplace from its Silicon Valley facility. Macrotron develops OEM partnerships by consistently fulfilling customer requirements for dependable and cost effective services and products. To this end, Macrotron has invested in current technology to establish itself as the ideal contract partner of the Electronics OEM customer.
The impact of DevOps in the cloud era is potentially profound. DevOps helps businesses deliver new features continuously, reduce cycle time and achieve sustained innovation by applying agile and lean principles to assist all stakeholders in an organization who develop, operate, or benefit from the business's lifecycle.
In his session at DevOps Summit, Prashanth Chandrasekar, General Manager at Rackspace, will examine whether and how companies can work with external DevOps specialists to achieve "DevOps elasticity" and DevOps expertise at scale while internally focusing on writing code and development.
Over the last few years, the healthcare ecosystem has revolved around innovations in Electronic Health Record (EHR) based systems. This evolution has helped us achieve much-desired interoperability. Now the focus is shifting to other equally important aspects: scalability and performance. When applying cloud computing environments to EHR systems, special consideration needs to be given to the cloud enablement of Veterans Health Information Systems and Technology Architecture (VistA), the largest single medical system in the United States.
Headquartered in Santa Monica, California, Bitium was founded by Kriz and Erik Gustavson. The 1,500 cloud-based applications using Bitium’s analytics, app management, and single sign-on services include bug trackers, customer service dashboards, Google Apps, and social networks. The firm states that website administrators can do multiple tasks online without revealing passwords. Bitium’s advisors include Microsoft’s former CMO and former senior vice president of strategy, the founder and CEO of Like.com, a product strategist at IBM and Oracle, Hootsuite’s CEO, and the founder and CEO of KISSmetrics, among others. More about Bitium can be found on its website at www.bitium.com.
What process has your provider undertaken to ensure that the cloud tenant will receive predictable performance and service? What was involved in the planning? Who owns and operates the data center? What technology is being used? How is it being supported? In his session at 14th Cloud Expo, Dave Weisbrot, Cloud Business Manager for QTS, will provide the attendees a look into what it takes to stand up and stand behind a highly available certified cloud IaaS.
Most of today’s hardware manufacturers are building servers with at least one SATA port, but not every systems engineer utilizes them. This is a loss in the game of maximizing potential storage space in a fixed unit. The SATADOM Series was created by Innodisk as a high-performance, small form factor boot drive with low power consumption that plugs into the unused SATA port on your server board as an alternative to hard drive or USB boot-up. Built for 1U systems, this powerful device is smaller than a one-dollar coin and frees up otherwise dead space on your motherboard.
To meet the requirements of tomorrow’s cloud hardware, Innodisk invested internal R&D resources to develop its SATA III series of products. The SATA III SATADOM boasts read/write speeds of 500/180 MB/s respectively, double the read/write speeds of SATA II products.
Fundamentally, SDN is still mostly about network plumbing. While plumbing may be useful to tinker with, what you can do with your plumbing is far more intriguing. A rigid interpretation of SDN confines it to Layers 2 and 3, and that's reasonable. But SDN opens opportunities for novel constructions in Layers 4 to 7 that solve real operational problems in data centers. "Data center," in fact, might become anachronistic - data is everywhere, constantly on the move, seemingly always overflowing. Networks move data, but not all networks are suitable for all data.
Over the past few years, enterprises have been moving to the cloud to streamline processes and operations. A study last year by TheInfoPro indicated that there is no sign of cloud investment slowing down – predicting an average growth rate of cloud spending of 36 percent from this year until 2016. As the Internet of Things continues its march to the mainstream, organizations have more opportunities to expand relationships with customers and partners by building and offering new services. These services have the potential to exponentially drive revenue and create business value.
The question is, what do CIOs need to do to make sure that their companies can take advantage of this potential? The first step is to look at their existing technical infrastructure to ensure that it can truly enable companies to drive change. One crucial component: security, including identity and access management.
14th International Cloud Expo, held on June 10–12, 2014 at the Javits Center in New York City, featured three content-packed days with a rich array of sessions about the business and technical value of cloud computing, Internet of Things, Big Data, and DevOps led by exceptional speakers from every sector of the IT ecosystem. The Cloud Expo series is the fastest-growing Enterprise IT event in the past 10 years, devoted to every aspect of delivering massively scalable enterprise IT as a service.
The old monolithic style of building enterprise applications just isn't cutting it any more. It results in applications and teams alike that are complex, inefficient, and inflexible, with considerable communication overhead and long change cycles.
Microservices architectures, while they've been around for a while, are now gaining serious traction with software organizations, and for good reasons: they enable small targeted teams, rapid continuous deployment, independent updates, true polyglot languages and persistence layers, and a host of other benefits.
But truly adopting a microservices architecture requires dramatic changes across the entire organization, and a DevOps culture is absolutely essential.
Is it just me, or has there been an explosion of buzz words lately? Don’t get me wrong, the IT industry innovates at a crazy pace normally, but it seems that things have been evolving faster than ever and that a fundamental change in the way things are done is underway. We can attribute this change to one thing: the cloud. Cloud computing is by no means new, but in 2014 it has come into its own.
Cloud computing is accelerating disruption by changing how data centers deploy, develop and consume everything from software and hardware, to how they offer products and services to their customers.
Let’s take a look at a few of these hot technologies and why you’ll be adopting some of them, whether you realize it now or not.
The world's largest and most successful private cloud operations are revolutionizing their approach to demand management. These organizations have recognized that while self-service portals are a component in the overall cloud architecture, these tools do not enable demand management. In fact, in many cases the portals and end-user interfaces don't actually capture anything to do with demand, but instead force the user to enter the capacity "supply" requirements that they think will meet their demands. This is very different. Large enterprises have recognized the need to look beyond immediate requests to also model the "pipeline" of new demands that will be coming down the road. Only by capturing immediate requirements, understanding the pipeline, and knowing what is already running in their environments can organizations hope to accurately model demand and properly allocate compute, storage and network resources.
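Combining those three inputs can be sketched in a few lines. The sketch below is purely illustrative: the workload names, resource fields, and the pipeline-confidence weight are invented for the example, not taken from any particular capacity-planning product.

```python
# Hypothetical sketch: modeling projected demand rather than raw "supply" requests.
from dataclasses import dataclass

@dataclass
class Workload:
    name: str
    cpu_cores: int
    storage_gb: int

def project_demand(running, immediate, pipeline, pipeline_confidence=0.6):
    """Combine current usage, firm requests, and a discounted pipeline."""
    def totals(workloads):
        return (sum(w.cpu_cores for w in workloads),
                sum(w.storage_gb for w in workloads))

    run_cpu, run_gb = totals(running)
    imm_cpu, imm_gb = totals(immediate)
    pipe_cpu, pipe_gb = totals(pipeline)

    # Pipeline demand is uncertain, so weight it by an estimated confidence.
    return (run_cpu + imm_cpu + round(pipe_cpu * pipeline_confidence),
            run_gb + imm_gb + round(pipe_gb * pipeline_confidence))

running = [Workload("erp", 32, 2000)]          # already in the environment
immediate = [Workload("analytics", 16, 500)]   # firm, approved requests
pipeline = [Workload("mobile-backend", 24, 1000)]  # likely future demand

print(project_demand(running, immediate, pipeline))
```

The point of the discount factor is exactly the distinction the paragraph draws: pipeline demand is real but not yet firm, so it is modeled rather than simply summed.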
Big Data, the cloud, and mobile are converging technology trends that represent real opportunities for developers and IT pros to deliver more efficiencies and new value. On June 11, 2014, at the 14th International Cloud Expo, Microsoft Cloud delivered a complete education track as part of Microsoft's strategy to deliver an integrated yet open platform centered in the cloud that also enables customers to extend existing investments. For developers and architects, this meant using familiar tools with enhanced capabilities never seen before. In the past 12 months Microsoft has offered tremendous innovation, including infrastructure as a service, cloud storage, cloud-based device management across heterogeneous devices, and Big Data solutions. Cloud Expo delegates learned how they can move their businesses and careers forward by taking Microsoft's innovation to their business.
Register and Save!
Save $500 on your “Golden Pass”! Call 201.802.3020
Silicon Valley Call For Papers Now Open
Submit your speaking proposal for
the upcoming SDDC Expo in
Silicon Valley! November 4-6, 2014
Please Call 201.802.3021
events (at) sys-con.com
SYS-CON's SDDC Expo, held each year in California, New York, Prague, Tokyo, and Hong Kong, is the world’s leading Cloud event in its 6th year, larger than all other Cloud events put together. For sponsorship, exhibit opportunities, and show prospectus, please contact Carmen Gonzalez, carmen (at) sys-con.com.
Senior Technologists, including CIOs, CTOs, VPs of technology, IT directors and managers, network and storage managers, network engineers, enterprise architects, communications and networking specialists, and directors of infrastructure; Business Executives, including CEOs, CMOs, CIOs, presidents, VPs, directors, business development, and product and purchasing managers.
Join Us as a Media Partner - Together We Can Rock the IT World!
SYS-CON Media has a flourishing Media Partner program in which mutually beneficial promotion and benefits are arranged between our own leading Enterprise IT portals and events and those of our partners.
If you would like to participate, please provide us with details of your website(s) and event(s) or your organization, and please include basic audience demographics as well as relevant metrics such as average page views per month.
So exactly how do you kick-start a DevOps strategy? For example, suppose your organization is tied down to a sequential but cumbersome Waterfall approach to software development that is wasting precious dollars and hindering productivity. Below we’ve outlined some strategy tips that every business leader will need to consider as they start down the path of DevOps adoption.
Whatever steps your organization takes on the DevOps path toward rolling out software faster and more effectively, deployment will require the support of your senior-level management team. Explain the advantages of DevOps to the executive team in terms that they can easily understand. Provide an outline of how DevOps and cloud computing can improve ROI and get your new mobile application into the hands of the customer faster and more effectively, with higher quality.
When we talk about the impact of BYOD and BYOA and the Internet of Things, we often focus on the impact on data center architectures. That's because there will be an increasing need for authentication, for access control, for security, for application delivery as the number of potential endpoints (clients, devices, things) increases. That means scale in the data center.
What we gloss over, what we skip, is that before any of these "things" ever makes a request to access an application it had to execute a DNS query. Every. Single. Thing.
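That lookup step is easy to make concrete. The minimal sketch below uses Python's standard-library resolver; `socket.getaddrinfo` is the same call most HTTP clients make internally before they can open a single connection. (The hostname here is just `localhost` so the example runs anywhere.)

```python
# Sketch: every "thing" resolves a name before it can talk to an application.
import socket

def resolve(hostname):
    """Return the unique IP addresses the resolver hands back for a hostname."""
    infos = socket.getaddrinfo(hostname, 443, proto=socket.IPPROTO_TCP)
    return sorted({info[4][0] for info in infos})

# Every device, sensor, and app instance triggers a lookup like this one
# before a single byte of application traffic flows.
print(resolve("localhost"))
```

Multiply that one call by every client, device, and thing on the network, and the scale pressure on DNS becomes obvious.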
Elasticity is hailed as one of the biggest benefits of cloud and software-defined architectures. It's more efficient than traditional scalability models that only went one direction: up. It's based on the premise that wasting money and resources all the time just to ensure capacity on a seasonal or periodic basis is not only unappealing, but unnecessary in the age of software-defined everything.
The problem is that scaling down is much, much harder than scaling up. Oh, not from the perspective of automation and orchestration. That is, as the kids say these days, easy peasy lemon squeezy. APIs have made the ability to add and remove resources simplicity itself. There isn't a load balancing service available today without this capability - at least not one that's worth having.
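The asymmetry can be shown in a toy scaling planner. Everything below is invented for illustration: no real cloud API is called, and the utilization threshold is arbitrary. The key detail is that the scale-up branch is pure arithmetic, while the scale-down branch must name specific instances to drain before they can be terminated.

```python
# Illustrative autoscaling sketch (hypothetical names and thresholds).
def plan_scaling(utilization, target=0.6):
    """Return ('add', n) or ('drain', [ids]) from per-instance utilization."""
    avg = sum(utilization.values()) / len(utilization)
    desired = max(int(len(utilization) * (avg / target)), 1)
    if avg > target:
        # Scaling up is simple arithmetic plus one API call per new instance.
        return ("add", max(desired - len(utilization), 1))
    # Scaling down: choose the least-loaded instances, but each one must be
    # drained of in-flight connections before it can be safely terminated.
    idle = sorted(utilization, key=utilization.get)[:len(utilization) - desired]
    return ("drain", idle)

print(plan_scaling({"a": 0.9, "b": 0.8, "c": 0.9}))   # hot: add capacity
print(plan_scaling({"a": 0.2, "b": 0.3, "c": 0.1}))   # idle: drain first
```

Note that the hard part is not in this code at all: it is everything implied by "drain", which is why scaling down remains harder than scaling up even when the orchestration is trivial.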
As enterprises work to rapidly embrace the mobile revolution, both for their workforce and to engage more deeply with their customers, the pressure is on for IT to support the tools needed by their application developers. Mobile application developers are working with a massive variety of technologies and platforms, but one trend that stands out is the rapid adoption of NoSQL database engines and the use of Database-as-a-Service (DBaaS) platforms and services to run them.
Gartner has predicted that by 2017, 20% of enterprises will have their own internal mobile app store, meaning that enterprises are deploying both commercial and custom applications to their workforce at increasing speeds. There’s no denying the massive growth in mobile applications within the enterprise.
The Internet of Things is only going to make that even more challenging as businesses turn to new business models and services fueled by a converging digital-physical world. Applications, whether focused on licensing, provisioning, managing or storing data for these "things" will increase the already significant burden on IT as a whole. The inability to scale from an operational perspective is really what software-defined architectures are attempting to solve by operationalizing the network to shift the burden of provisioning and management from people to technology.
In my first post, I discussed how software and various tools are dramatically changing the Ops department. This post centers on the automation process.
When I was younger, you actually had to build a server from scratch, buy power and connectivity in a data center, and manually plug a machine into the network. After wearing the operations hat for a few years, I have learned many operations tasks are mundane, manual, and often have to be done at two in the morning once something has gone wrong. DevOps is predicated on the idea that all elements of technology infrastructure can be controlled through code and automated. With the rise of the cloud it can all be done in real-time via a web service.
Infrastructure automation + virtualization solves the problem of having to be physically present in a data center to provision hardware and make network changes. Also, by automating the mundane tasks you can remove unnecessary personnel. One benefit of using cloud services is that costs scale linearly...
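To make "controlled through code" concrete: the request an engineer once fulfilled by racking a physical machine becomes a structured payload sent to a provisioning web service. The endpoint, field names, and flavor shape below are invented for illustration; real cloud APIs differ, but the idea is the same.

```python
# Hypothetical sketch of provisioning-as-code: building the JSON payload
# a cloud provisioning API might accept, instead of racking hardware.
import json

def provision_request(name, cpu_cores, memory_gb, network="default"):
    """Build the payload a (hypothetical) provisioning web service might accept."""
    return json.dumps({
        "server": {
            "name": name,
            "flavor": {"vcpus": cpu_cores, "ram_gb": memory_gb},
            "network": network,
        }
    }, sort_keys=True)

payload = provision_request("web-01", 4, 16)
print(payload)
```

Because the request is just data produced by code, it can be versioned, reviewed, tested, and replayed, which is exactly what makes the mundane two-in-the-morning tasks automatable.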
When Instagram was sold to Facebook in 2012, it employed only 13 people and maintained over 4 billion photos shared by its 80 million registered users.
Internally, Instagram was a small business. Externally, it was a web monster. Filling the gap between those two contradictory perspectives is DevOps.
Now to be fair, Instagram (like many other web monster properties today) has it easier than most other businesses because it supported only one application. One. That's in stark contrast to large enterprises which are, by most analyst firms, said to manage not one but one hundred and even one thousand applications - at the same time. Our own data indicates an average of 312 applications per customer, many of which are certainly integrated and interacting with one another.
Kirk Byers at SDN Central writes frequently on the topic of DevOps as it relates (and applies) to the network and recently introduced a list of seven applicable DevOps principles in an article entitled "DevOps and the Chaos Monkey." On this list is the notion of reducing variation. This caught my eye because reducing variation is a key goal of Six Sigma; in fact, its entire formula is based on measuring the impact of variation in results. The thought is that by measuring deviation from a desired outcome, you can immediately recognize whether changes to a process improve the consistency of the outcome. Quality is achieved by reducing variation, or so the methodology goes.
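Measuring that deviation takes only a few lines. The numbers below are made up for the example: two sets of deploy times, one from a hypothetical manual process and one after automation. The standard deviation, the statistic at the heart of Six Sigma, is what tells you the second process is more consistent even though both have nonzero averages.

```python
# Toy example: quantifying variation in an operational outcome.
# Lower standard deviation = a more consistent (higher "quality") process.
from statistics import mean, stdev

before = [120, 95, 200, 80, 310]   # deploy times (s), hypothetical manual process
after = [101, 98, 104, 99, 102]    # deploy times (s), hypothetical automated process

for label, times in (("manual", before), ("automated", after)):
    print(f"{label}: mean={mean(times):.0f}s stdev={stdev(times):.0f}s")
```

Tracking this one number over time is a cheap way to see whether a process change actually improved consistency rather than just shifting the average.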
I love The Internet of Things. You do, too, even if you don’t know exactly what it is yet. Hardly a day goes by where I don’t find a story about some awesome company creating some new awesome gadget that taps into The Internet of Things. Scrolling through these stories is like taking a peek at the world (and our homes!) three to five years down the line.
But, uh, what exactly is The Internet of Things? And why should you care?
Executives charged with building business-driven applications have an extremely challenging task ahead of them. However, the cavalry has arrived with useful tools and strategies built specifically to keep modern applications working efficiently.
We partnered with Gigaom Research to carefully grasp and articulate how these modern methodologies are improving the lives of IT professionals in today’s software-driven businesses. Typically, this knowledge has been so fragmented that it's been hard to find in one cohesive place. Several blogs and research reports touch on various aspects, but what we learned from our research has been astounding.
Inarguably, the pressure is on "the network" to get in gear, so to speak, and address how fast its services can be up and running. Software-defined architectures like cloud and SDN have arisen in response to this pressure, attempting to provide the means by which critical network services can be provisioned in hours instead of days.
Much of the blame for the time it takes to provision network services lands squarely on the fact that much of the network is composed of hardware. Not just any hardware, mind you, but special hardware. Such devices take time to procure, time to unbox, time to rack and time to cable. It's a manually intensive process that, when not anticipated, can take weeks to acquire and get into place.
Back when we were doing DB2 at IBM, there was an important older product called IMS which brought in significant revenue. With another database product coming (based on relational technology), IBM did not want any cannibalization of the existing revenue stream. Hence we coined the phrase “dual database strategy” to justify the need for both DBMS products. In a similar vein, several vendors are concocting all kinds of terms and strategies to justify newer products under the banner of Big Data.
What if you could deploy a new IT service shortly after you defined the requirements? And, just imagine the bliss, if your IT spend could directly translate into a competitive advantage. Predicting the ROI would be relatively easy. You would be the envy of your peer group.
Unfortunately, as most senior executives already know, it's never that simple.
Typically, you perform the technology assessment due diligence up-front, you place your bets based upon the most compelling guidance, and then you closely monitor the results. It's an iterative process, where confidence builds over time. Maybe that's why new business technology spending tends to be aligned with a past success.
But this procurement model doesn't adapt very well in response to unanticipated significant market events or the rapid acceleration of unplanned technology migrations. Moreover, tight budgets and other resource constraints can severely limit an organization's ability to react quickly to changing environments...
Think of a cloud provider. I’d bet that for the majority of people reading this article, the first that comes to mind is AWS. Amazon Web Services was a trailblazer in the cloud space, and it still leads adoption rates at all levels of the market, from SMBs to multinationals. In some ways that’s great: Amazon constantly innovates and refines its product. But, at the same time, it’s not entirely healthy for a market to be completely dominated by one vendor. Google’s Compute Engine is snapping at Amazon’s heels, but ideally we’d like to see a flourishing market with many competitors. A market in which the word “cloud” doesn’t immediately bring one vendor to mind.
User Experience (UX) in networking is a tricky thing. It’s not just about the direct user interaction of a particular feature or of a particular product. Over at Packet Pushers, we see many blog entries reviewing network products. Time and time again, they show us that UX encompasses something much broader: it’s the experience of how well the vendor delivers the product, not just the product itself. Vendors must consider the user’s experience from the first interactions with the company to the unboxing of the product, the ease of finding and consuming relevant documentation, and through the actual support process.