2014 West Gold Sponsors


2014 West Bronze Sponsors

2014 West Exhibitors

2014 West Power Panel Sponsor

2014 West Mobile App Sponsor

2014 West Media Sponsor

2014 Diamond Sponsor

2014 East Platinum Plus Sponsors

2014 East Platinum Sponsors

2014 East Gold Sponsor

2014 East Track Sponsor

2014 East Silver Sponsors

2014 East Bronze Sponsors

2014 East Exhibitors

2014 East Association Sponsors

2014 East Media Sponsor

2013 West Diamond Sponsor

2013 West Platinum Plus Sponsor

2013 West Platinum Sponsor

2013 West Gold Sponsors

2013 West Bronze Sponsors

2013 West Exhibitors

2013 West Oracle Workshop

2013 West Consortium Sponsor

2013 West e-Bulletin Sponsors

2013 West Association Sponsors

2013 West Media Sponsors

SDDC Breaking News
The old monolithic style of building enterprise applications just isn't cutting it anymore. It produces applications, and teams, that are complex, inefficient, and inflexible, with considerable communication overhead and long change cycles. Microservices architectures, while they've been around for a while, are now gaining serious traction with software organizations, and for good reasons: they enable small, targeted teams, rapid continuous deployment, independent updates, truly polyglot languages and persistence layers, and a host of other benefits. But truly adopting a microservices architecture requires dramatic changes across the entire organization, and a DevOps culture is absolutely essential.
14th International Cloud Expo, held on June 10–12, 2014, at the Javits Center in New York City, featured three content-packed days with a rich array of sessions about the business and technical value of cloud computing, the Internet of Things, Big Data, and DevOps, led by exceptional speakers from every sector of the IT ecosystem. The Cloud Expo series has been the fastest-growing enterprise IT event of the past 10 years, devoted to every aspect of delivering massively scalable enterprise IT as a service.
Is it just me, or has there been an explosion of buzzwords lately? Don’t get me wrong, the IT industry normally innovates at a crazy pace, but it seems that things have been evolving faster than ever and that a fundamental change in the way things are done is underway. We can attribute this change to one thing: the cloud. Cloud computing is by no means new, but in 2014 it has come into its own. Cloud computing is accelerating disruption by changing how data centers deploy, develop, and consume everything from software to hardware, and how they offer products and services to their customers. Let’s take a look at a few of these hot technologies and why you’ll be adopting some of them, whether you realize it now or not.
The world's largest and most successful private cloud operations are revolutionizing their approach to demand management. These organizations have recognized that while self-service portals are a component of the overall cloud architecture, these tools do not enable demand management. In fact, in many cases the portals and end-user interfaces don't actually capture anything to do with demand; instead they force the user to enter the capacity "supply" requirements that the user thinks will meet their demands, which is a very different thing. Large enterprises have recognized the need to look beyond immediate requests and also model the "pipeline" of new demands that will be coming down the road. Only by capturing immediate requirements, understanding the pipeline, and knowing what is already running in their environments can organizations hope to accurately model demand and properly allocate compute, storage, and network resources.
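A minimal sketch of that distinction, assuming entirely hypothetical data structures and numbers: demand is modeled as what is already running, plus immediate requests, plus a probability-weighted pipeline of future demands, rather than as raw capacity "supply" entries typed into a portal.

    # Pipeline-aware demand model (illustrative only; names and numbers
    # are invented, not any vendor's API).
    from dataclasses import dataclass

    @dataclass
    class Demand:
        name: str
        cpu_cores: int
        storage_gb: int
        probability: float = 1.0  # pipeline items may never materialize

    def forecast(running, immediate, pipeline):
        """Expected demand across current and future workloads."""
        items = running + immediate + pipeline
        cpu = sum(d.cpu_cores * d.probability for d in items)
        storage = sum(d.storage_gb * d.probability for d in items)
        return cpu, storage

    running = [Demand("erp-prod", 64, 2000)]
    immediate = [Demand("new-analytics", 32, 5000)]
    pipeline = [Demand("q3-mobile-backend", 48, 1000, probability=0.6)]

    print(forecast(running, immediate, pipeline))  # (124.8, 7600.0)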
Big Data, the cloud, and mobile are converging technology trends that represent real opportunities for developers and IT pros to deliver new efficiencies and new value. On June 11, 2014, at the 14th International Cloud Expo, Microsoft delivered a complete education track as part of its strategy for an integrated yet open cloud-centered platform that also lets customers extend existing investments. For developers and architects, this meant using familiar tools with enhanced capabilities never seen before. In the past 12 months Microsoft has offered tremendous innovation, including infrastructure as a service, cloud storage, cloud-based device management across heterogeneous devices, and Big Data solutions. Cloud Expo delegates learned how they can move their businesses and careers forward by taking Microsoft's innovation to their business.
Technology is enabling a new approach to collecting and using data. This approach, commonly referred to as the “Internet of Things” (IoT), enables businesses to use real-time data from all sorts of things including machines, devices and sensors to make better decisions, improve customer service, and lower the risk in the creation of new revenue opportunities. In his session at Internet of @ThingsExpo, Dave Wagstaff, Vice President and Chief Architect at BSQUARE Corporation, will discuss the real benefits to focus on, how to understand the requirements of a successful solution, the flow of data, and how to best approach deploying an IoT solution that will drive results.
Virtualization has almost become a no-brainer for most major companies out there. The added benefits of greater efficiency and lower energy costs are certainly excellent reasons for adopting a virtualization strategy and the vast majority of big businesses are taking advantage of it. Trailing behind them are the small businesses which, with limited resources, have slowly but steadily been catching up to their bigger competitors. For several years now, the focus has been on how server virtualization can give small businesses a boost, but many of these same companies are beginning to realize the benefits that come from storage virtualization too. According to one report, more than half of small businesses have implemented storage virtualization already, and as the rest begin to integrate it into their operations, they'll find many of the same advantages that will help them keep up with the demands of an aggressively competitive business world.
Headquartered in Santa Monica, California, Bitium was founded by Kriz and Erik Gustavson. The 1,500 cloud-based applications using Bitium’s analytics, app management, and single sign-on services include bug trackers, customer service dashboards, Google Apps, and social networks. The firm says website administrators can perform multiple tasks online without revealing passwords. Bitium’s advisors include Microsoft’s former CMO and former senior vice president of strategy, the founder and CEO of Like.com, a product strategist at IBM and Oracle, Hootsuite’s CEO, and the founder and CEO of KISSmetrics, among others. More about Bitium can be found on its website at www.bitium.com.
This one-hour webinar will cover the core benefits and features of up.time, including how up.time proactively monitors, alerts and reports on the performance, availability, and capacity of all physical servers, virtual machines, network devices, applications, and services. We’ll take you through the up.time Dashboards, show you how alerting and action profiles work, and dive into up.time’s deep reporting capabilities, including SLA reports. In the end, you’ll know how up.time works and what up.time can do for your company.
We all feel it: data use and growth are explosive. Individuals and businesses are consuming, and generating, more data every day. The challenges are common for nearly all businesses operating in every industry: ingest all the data, analyze it as fast as possible, make good sense of it, and ultimately drive smart decisions to positively affect the business, all as fast as possible! The innovations in supercomputing offer significant breakthroughs to help organizations meet these everyday challenges. What once was reserved for data-intensive scientific computing is now especially relevant to mission-critical business computing, and it’s all driven by big data.
A completely new class of computing platform is on the horizon. These machines are called Microservers by some, ARM Servers by others, and sometimes even ARM-based Servers. No matter what you call them, Microservers will have a huge impact on the data center and on server computing in general. Although few people are familiar with Microservers today, their impact will be felt very soon. This new category of computing platform is available today and is predicted to sustain triple-digit growth rates for some years to come, growing to over 20% of the server market by 2016, according to Oppenheimer ("Cloudy With A Chance of ARM," Oppenheimer Equity Research Industry Report).
The Internet of Things (IoT) is rapidly breaking out of its heretofore relatively obscure enterprise applications (such as plant-floor control and supply chain management) and going mainstream into the consumer space. More and more creative folks are interconnecting everyday products such as household items, mobile devices, appliances, and cars, unleashing new and imaginative scenarios. We are seeing a lot of excitement around applications in home automation, personal fitness, and in-car entertainment, and this excitement will bleed into other areas. On the commercial side, more manufacturers will embed sensors in their products and connect them to the Internet to monitor their performance and offer proactive maintenance services. As a result, engineers who know how to incorporate software and networking into their mechanical designs will be in ever greater demand.
Cloud and Big Data present unique dilemmas: embracing the benefits of these new technologies while maintaining the security of your organization’s assets. When an outside party owns, controls and manages your infrastructure and computational resources, how can you be assured that sensitive data remains private and secure? How do you best protect data in mixed use cloud and big data infrastructure sets? Can you still satisfy the full range of reporting, compliance and regulatory requirements? In his session at 15th Cloud Expo, Derek Tumulak, Vice President of Product Management at Vormetric, will discuss how to address data security in cloud and Big Data environments so that your organization isn’t next week’s data breach headline.
In Part VI, we dove into the Nagle algorithm – perhaps (or hopefully) something you’ll never see. In Part VII, we get back to “pure” network and TCP roots as we examine how the TCP receive window interacts with WAN links. Each node participating in a TCP connection advertises its available buffer space using the TCP window size field. This value identifies the maximum amount of data a sender can transmit without receiving a window update via a TCP acknowledgement; in other words, this is the maximum number of “bytes in flight” – bytes that have been sent, are traversing the network, but remain unacknowledged. Once the sender has reached this limit and exhausted the receive window, the sender must stop and wait for a window update.
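That interaction reduces to simple arithmetic: once a full window's worth of bytes is in flight, throughput is capped at the window size divided by the round-trip time, regardless of link bandwidth. A quick sketch with generic numbers (not taken from the article):

    # Throughput ceiling imposed by the TCP receive window:
    #   max throughput = window size / round-trip time
    def max_throughput_mbps(window_bytes, rtt_ms):
        return (window_bytes * 8) / (rtt_ms / 1000.0) / 1e6

    # A classic unscaled 64 KB window on a 40 ms WAN round trip:
    print(max_throughput_mbps(65535, 40))  # ~13.1 Mbps, even on a 10 Gbps link

    # The same window on a 1 ms LAN round trip:
    print(max_throughput_mbps(65535, 1))   # ~524 Mbps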
The cloud is everywhere and growing, and with it SaaS has become an accepted means of software delivery. SaaS is more than just a technology; it is a thriving business model estimated to be worth around $53 billion by 2015, according to IDC. The question is: how do you build and scale a profitable SaaS business model? In his session at 15th Cloud Expo, Jason Cumberland, Vice President, SaaS Solutions at Dimension Data, will give the audience an understanding of common mistakes businesses make when transitioning to SaaS; how to avoid them; and how to build a profitable and scalable SaaS business.
Register and Save!
Save $500 on your “Golden Pass”! Call 201.802.3020


Silicon Valley Call For Papers Now Open
Submit your speaking proposal for the upcoming SDDC Expo in Silicon Valley! November 4-6, 2014


Sponsorship Opportunities
Please call 201.802.3021 or email events (at) sys-con.com
SYS-CON's SDDC Expo, held each year in California, New York, Prague, Tokyo, and Hong Kong, is the world’s leading Cloud event; in its 6th year, it is larger than all other Cloud events put together. For sponsorship, exhibit opportunities, and a show prospectus, please contact Carmen Gonzalez, carmen (at) sys-con.com.
Cloud Expo New York All-Star Speakers Included...

KAIL (Netflix)
GOLDEN (ActiveState)
KEMP (Nebula)
BEHR (Praxis Flow)
LOUNIBOS (SOASTA)
CRAWFORD (AVOA)
MORGENTHAL (Perficient, Inc.)
COCKCROFT (Battery Ventures)
HAFF (Red Hat)
SHALOM (GigaSpaces)
SUSSNA (Ingineering.IT)
ROBERTS (BMC)
VERNON (VictorOps)
WILLIS (Stateless Networks)
ROESE (EMC)
PADIR (Progress)
AMAR (MyPermissions)
O'CONNOR (AppZero)
BHARGAVA (JumpCloud)
DEVINE (IBM)
RUSSELL (IBM)
MALEKZADEH (Cumulus Networks)
McCALLION (Bronze Drum)
NEGRIS (Yottamine Analytics)
JACKSON (GovCloud Network)
KAVIS (Kavis Technology)
HARVEY (Chef)
KAR (StrongLoop)
McFARLANE (LiveOps)
IVANOV (Telestax)
DUNKLEY (Acision)
FABLING (Esri)
MATTHIEU (SKYNET.im)
HILLIER (CiRBA)
JACOBI (Kaazing)
FALLOWS (Kaazing)

Follow @SDDCExpo on Twitter


Testimonials
"Great exhibits, great audience, great floor traffic, great conversations with IT leaders and folks in the channel."
TOM LAYDOS, Director, Marketing & Sales Operations at Evolve IP

"We had a great experience! We look forward to helping the people we met at Cloud Expo build their businesses."
Cari.net (tweet)

"The 2012 Cloud Expo in NY was a great success for the Dell cloud team as we met with many customers, partners, and cloud technologists."
STEPHEN SPECTOR, Senior Product Marketing, Dell Cloud Services

"Cloud Expo turned out to be an amazing gathering of entrepreneurs."
NISH BURKE, Product Marketing Manager, StorageCraft


Who Should Attend?
Senior Technologists, including CIOs, CTOs, VPs of technology, IT directors and managers, network and storage managers, network engineers, enterprise architects, communications and networking specialists, and directors of infrastructure; and Business Executives, including CEOs, CMOs, CIOs, presidents, VPs, directors, business development, and product and purchasing managers.

Download Cloud Computing Journal & Show Guide
Cloud Computing Journal: Download PDF
Cloud Expo Show Guide: Download PDF

Join Us as a Media Partner - Together We Can Rock the IT World!
SYS-CON Media has a flourishing Media Partner program in which mutually beneficial promotion and benefits are arranged between our own leading Enterprise IT portals and events and those of our partners.

If you would like to participate, please provide us with details of your website(s) and event(s) or your organization, and include basic audience demographics as well as relevant metrics such as average page views per month.

To get involved, email Lissette Mercado at [email protected].

Latest Blog Posts
I love The Internet of Things. You do, too, even if you don’t know exactly what it is yet. Hardly a day goes by where I don’t find a story about some awesome company creating some new awesome gadget that taps into The Internet of Things. Scrolling through these stories is like taking a peek at the world (and our homes!) three to five years down the line. But, uh, what exactly is The Internet of Things? And why should you care?
Executives charged with building business-driven applications have an extremely challenging task ahead of them. However, the cavalry has arrived, with useful tools and strategies built specifically to keep modern applications working efficiently. We partnered with Gigaom Research to carefully grasp, and articulate, how these modern methodologies are improving the lives of IT professionals in today’s software-driven businesses. Typically this knowledge has been so fragmented that it has been hard to find in one cohesive place. Several blogs and research reports touch on various aspects, but what we learned from our research has been astounding.
Inarguably, the pressure is on "the network" to get in gear, so to speak, and address how fast its services can be up and running. Software-defined architectures like cloud and SDN have arisen in response to this pressure, attempting to provide the means by which critical network services can be provisioned in hours instead of days. Much of the blame for the time it takes to provision network services winds up landing squarely on the fact that much of the network is composed of hardware. Not just any hardware, mind you, but special hardware. Such devices take time to procure, time to unbox, time to rack, and time to cable. It's a manually intensive process that, when not anticipated, can take weeks before the hardware is acquired and in place.
Back when we were doing DB2 at IBM, there was an important older product called IMS which brought in significant revenue. With another database product coming (based on relational technology), IBM did not want any cannibalization of the existing revenue stream. Hence we coined the phrase “dual database strategy” to justify the need for both DBMS products. In a similar vein, several vendors are concocting all kinds of terms and strategies to justify newer products under the banner of Big Data.
What if you could deploy a new IT service shortly after you defined the requirements? And, just imagine the bliss, if your IT spend could directly translate into a competitive advantage. Predicting the ROI would be relatively easy. You would be the envy of your peer group. Unfortunately, as most senior executives already know, it's never that simple. Typically, you perform the technology assessment due diligence up-front, you place your bets based upon the most compelling guidance, and then you closely monitor the results. It's an iterative process, where confidence builds over time. Maybe that's why new business technology spending tends to be aligned with a past success. But this procurement model doesn't adapt very well in response to unanticipated significant market events or the rapid acceleration of unplanned technology migrations. Moreover, tight budgets and other resource constraints can severely limit an organization's ability to react quickly to changing environments...
Think of a cloud provider. I’d bet that for the majority of people reading this article, the first that comes to mind is AWS. Amazon Web Services were a trailblazer in the cloud space, and they still lead adoption rates at all levels of the market, from SMBs to multinationals. In some ways that’s great: Amazon constantly innovate and refine their product. But, at the same time, it’s not entirely healthy for a market to be completely dominated by one vendor. Google’s Compute Engine is snapping at Amazon’s heels, but ideally we’d like to see a flourishing market with many competitors. A market in which the word “cloud” doesn’t immediately bring one vendor to mind.
In my first post, I discussed how software and various tools are dramatically changing the Ops department. This post centers on the automation process. When I was younger, you actually had to build a server from scratch, buy power and connectivity in a data center, and manually plug a machine into the network. After wearing the operations hat for a few years, I have learned that many operations tasks are mundane, manual, and often have to be done at two in the morning once something has gone wrong. DevOps is predicated on the idea that all elements of technology infrastructure can be controlled through code and automated. With the rise of the cloud it can all be done in real time via a web service. Infrastructure automation plus virtualization solves the problem of having to be physically present in a data center to provision hardware and make network changes. Also, by automating the mundane tasks you can remove unnecessary personnel. The benefits of using cloud services is costs scale linea...
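As a rough illustration of that premise, here is a deliberately simplified sketch of driving infrastructure from code. The provision and deprovision calls are hypothetical stand-ins for a real provider's API, not any specific SDK:

    # Declarative "infrastructure as code" toy: declare a desired fleet,
    # then reconcile reality toward it (all names are illustrative).
    desired_state = {"web": 4, "worker": 2}

    def provision(role):
        print(f"API call: create {role} instance")

    def deprovision(role):
        print(f"API call: destroy {role} instance")

    def reconcile(desired, running):
        """Drive the running fleet toward the declared desired state."""
        for role, want in desired.items():
            have = running.get(role, 0)
            for _ in range(want - have):
                provision(role)    # scale up
            for _ in range(have - want):
                deprovision(role)  # scale down

    reconcile(desired_state, {"web": 2, "worker": 3})
    # => creates two web instances, destroys one worker instance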
User Experience (UX) in networking is a tricky thing. It’s not just about the direct user interaction with a particular feature or a particular product. Over at Packet Pushers, we see many blog entries reviewing network products. Time and time again, they show us that UX encompasses something much broader: it’s the experience of how well the vendor delivers the product, not just the product itself. Vendors must consider the user’s experience from the first interactions with the company, to the unboxing of the product, to the ease of finding and consuming relevant documentation, through the actual support process.
Kirk Byers at SDN Central writes frequently on the topic of DevOps as it relates (and applies) to the network, and recently introduced a list of seven applicable DevOps principles in an article entitled "DevOps and the Chaos Monkey." On this list is the notion of reducing variation. This caught my eye because reducing variation is a key goal of Six Sigma; in fact, its entire formula is based on measuring the impact of variation on results. The thought is that by measuring deviation from a desired outcome, you can immediately recognize whether changes to a process improve the consistency of the outcome. Quality is achieved by reducing variation, or so the methodology goes.
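To make that concrete, here is a toy illustration (with made-up numbers) of treating variation as something you measure: compare the spread of deployment times for a hand-run process against a scripted one.

    # Reducing variation, measured: standard deviation of deploy times.
    from statistics import mean, stdev

    manual    = [12, 45, 19, 60, 25, 38]  # minutes per deploy, hand-run
    automated = [14, 15, 13, 16, 14, 15]  # same task, scripted

    for label, runs in (("manual", manual), ("automated", automated)):
        print(f"{label}: mean={mean(runs):.1f} min, stdev={stdev(runs):.1f} min")
    # Lower standard deviation means a more consistent outcome, which is
    # exactly the Six Sigma goal of reducing variation.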
Achieving the ultimate “five nines” of web site availability (around 5 minutes of downtime a year) has been a goal of many organizations since the beginning of the internet era. There are several ways to accomplish this, but essentially a few principles apply. Web applications come in all shapes and sizes, from static to dynamic, from simple to complex, from specific to general. No matter the size, availability is important to support the customers and the business. The most basic high-availability architecture is the typical 3-tier design. A pair of ADCs in the DMZ terminates the connection; they in turn intelligently distribute the client request to a pool of (multiple) application servers, which then query the database servers for the appropriate content. Each tier has redundant servers, so in the event of a server outage the others take the load and the system stays available.
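The “five nines” figure falls directly out of minutes-per-year arithmetic:

    # Allowed downtime per year at each availability level.
    MINUTES_PER_YEAR = 365 * 24 * 60  # 525,600

    for availability in (0.99, 0.999, 0.9999, 0.99999):
        downtime = MINUTES_PER_YEAR * (1 - availability)
        print(f"{availability:.3%} availability: {downtime:,.1f} min/year down")
    # Five nines (0.99999) allows about 5.3 minutes of downtime per year.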
In the five seconds it takes you to read this, 60% of your visitors just abandoned your site. Another 20% were gone before you even hit the first comma, and 30% of all of them are purchasing from one of your competitors. That's what Limelight Networks' "State of the User Experience" says, which pretty much falls in line with every other survey of consumers. They are, on the whole, impatient and unwilling to suffer poor performance when there is a veritable cornucopia of choices available out there on the Internet.
Application delivery, as defined as its own little corner of the network industry, has been fairly focused on assuring the performance, security, and availability of applications since its inception back around 2003. Oh, the ways in which those three core tenets have been supported by application delivery controllers have evolved during that time, but the focus has always been on the goal of making apps fast, secure, and available. But that was then, and this is next. The new world is an application world, and it's not just having them on the Internet that counts. Applications, whether mobile or web, consumer or employee, are what enable and grow business today. The app is a requirement, a competitive differentiator, the keystone of an ecosystem around which businesses will rise and fall.
It's not the first time we've heard the statement that cloud can be too expensive, and I doubt it will be the last. This latest episode comes from Alexei Rodriguez, Head of Ops at Evernote, by way of Structure 2014. It is important to note that this admission, like those in the past, has come from what we call "web monsters." Web monsters are, as the name implies, web-first (and usually web-only) organizations who have millions (or billions) of users. Modern web monsters generally have only one application for which they are responsible, a la Evernote, Netflix, Facebook, etc...
In a previous article, we talked about “Short T’s.” We talked about how, in network engineering, the “T” is very long: Configuring a network to achieve business goals requires considerable skill and knowledge. While we set up a conceptual model in that post to talk about what “T” means in general terms, we did not discuss in detail how to articulate “T” more specifically for network engineering. In this post, we’ll explore this in a little more detail.
Go ahead. Name a cloud environment that doesn't include load balancing as the key enabler of elastic scalability. I've got coffee... so it's good, take your time... Exactly. Load balancing - whether implemented as traditional high availability pairs or clustering - provides the means by which applications (and infrastructure, in many cases) scale horizontally. It is load balancing that is at the heart of elastic scalability models, and that provides a means to ensure availability and even improve performance of applications. But simple load balancing alone isn't enough. Too many environments and architectures are wont to toss a simple, network-based solution at the problem and call it a day. But rudimentary load balancing techniques that rely solely on a set of metrics are doomed to fail eventually. That's because a simple number like "connection count" does not provide enough context to make an intelligent load balancing decision. An application instance may currently have only ...
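A small sketch of that point, using hypothetical servers and numbers: plain least-connections picks the server with fewer connections even when that server is degraded, while folding in a second signal such as observed latency picks the healthier one.

    # Why raw connection count lacks context (illustrative numbers only).
    servers = [
        {"name": "A", "connections": 80, "avg_latency_ms": 20},
        {"name": "B", "connections": 60, "avg_latency_ms": 300},  # degraded
    ]

    def least_connections(pool):
        return min(pool, key=lambda s: s["connections"])

    def context_aware(pool):
        # Combine load with responsiveness instead of one raw metric.
        return min(pool, key=lambda s: s["connections"] * s["avg_latency_ms"])

    print(least_connections(servers)["name"])  # B, despite being degraded
    print(context_aware(servers)["name"])      # A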