The SDDC Is Here! Now Help Push It Forward!
  Experience IT-as-a-Service at SDDC Expo West. Learn and Contribute in the heart of Silicon Valley Nov 4-6



The Software-Defined Datacenter--the SDDC--sits firmly within the universe of cloud computing. Enterprise IT has become virtualized and re-assembled over the past decade, with software now able to define everything from specific services to entire datacenters.

Among the most dynamic aspects of the cloud computing revolution is the idea of IT-as-a-Service--presented to enterprise IT as an SDDC. Enterprise IT must grapple with legacy technology from the distant past, the recent past, and acquisitions, and eliminate the numerous--and massive--data and application silos that go with it. The SDDC is a breakthrough strategy that enables an integration of legacy with the latest in cloud computing.

The SDDC debate is far from over, so join us at SDDC Expo West Nov 4-6 at the Santa Clara Convention Center in the heart of Silicon Valley to hear the latest developments, strategies, and use cases involving the SDDC.

SDDC Expo West is co-located with Cloud Expo West, and will enable you to mingle with your colleagues, contribute to the discussion, and help drive this truly 21st-century feature of enterprise IT forward.

We'll see you in Santa Clara Nov 4-6!


The Top Keynotes, the Best Sessions, a Rock Star Faculty, and the Most Qualified Delegates at ANY SDDC Event!


The software-defined data center provides an agile, reliable and secure foundation for cloud, while also delivering the intelligence and control needed to create sustainable business value.
 
SDDC Expo is a premier conference that connects a wide range of stakeholders to provide a valuable and educational experience for all.




SYS-CON's Cloud Expo drew more than 7,000 attendees at the Jacob Javits Center
Benefits of Attending the THREE-Day Technical Program
  LEARN exactly why SDDC is relevant today from an economic, business and technology standpoint.
  HEAR first-hand from industry experts how to govern access to compute, storage, and network resources based on corporate IT policies.
  SEE how to control the data center.
  DISCOVER what the core components of the Software-Defined Data Center are.
  FIND OUT how to transform an inflexible, costly traditional data center into a cloud computing environment that is secure, virtualized and automated.
  MASTER the three building blocks of the SDDC – network virtualization, storage virtualization and server virtualization.
  LEARN what works, what doesn't, and what's next.


SDDC Breaking News
Many mid-market companies have invested significant time and resources to secure and back up their servers, client computers, data, and overall network infrastructure in the traditional client-server setup. Now cloud computing and virtualization, considered emerging technologies just a few years ago, have arrived on the scene, bringing both significant benefits and new challenges. Find out more about this transition to get the most out of your virtual environment.
Over the last few years the healthcare ecosystem has revolved around innovations in Electronic Health Record (EHR) based systems. This evolution has helped us achieve much-desired interoperability. Now the focus is shifting to other equally important aspects – scalability and performance. When applying cloud computing environments to EHR systems, special consideration needs to be given to the cloud enablement of the Veterans Health Information Systems and Technology Architecture (VistA), the largest single medical system in the United States.
The public cloud computing model is rapidly becoming the world’s most prolific IT deployment architecture, yet it leaves many promises unfulfilled. While offering scale, flexibility, and potential cost savings, the public cloud often lacks the isolation, computing power, and control advantages of bare metal servers. Recent feedback suggests that people who adopted public cloud solutions for their elasticity and convenience are now lamenting their “simple” solution’s complexity. To deploy enterprise solutions with the public cloud, one must consider redundancies as a safety net for outages and other disasters, as well as more intricate network architecture for true interoperability.
The widespread adoption of tablets and mobile devices in schools allows for tracking of key performance indicators that can help gauge the impact of specific initiatives on a student's future success. While skeptics raise ethical concerns about such techniques, several college campuses are already testing the use of big data. Behavioral economists theorize that "when presented with many options and little information, people find it difficult to make wise choices." This is the root of the movement to incorporate Big Data into the classroom. By compiling student data from an early age, we are able to better understand learning processes and identify issues, ultimately resulting in wiser, more informed decisions. Now when a teacher threatens that "this will go down on your permanent record," they really mean it.
According to Gartner, Big Data refers to "high volume, high velocity, and/or high variety information assets” – and, this is the key – “that require new forms of processing to enable enhanced decision making, insight discovery and process optimization." While Big Data may seem like an invaluable tool that all security teams should try to leverage, it is not practical for everyone to attempt to harness it on their own. Finding insight from data is rarely as simple as it seems. We are still in the early stages of the Big Data revolution, with people only now beginning to understand what is possible, and what it takes to get there. Simply investing in tools and development is not enough. The fact is security teams are still struggling to identify and respond to incidents in an effective way. The Verizon Data Breach Investigations Report of 2013 noted that outside parties, whether it be a telecom provider, credit card issuer, third-party vendor or the FBI, were responsible for 70% of data ...
This one-hour webinar will cover the core benefits and features of up.time, including how up.time proactively monitors, alerts and reports on the performance, availability, and capacity of all physical servers, virtual machines, network devices, applications, and services. We’ll take you through the up.time Dashboards, show you how alerting and action profiles work, and dive into up.time’s deep reporting capabilities, including SLA reports. In the end, you’ll know how up.time works and what up.time can do for your company.
Cloud services provider SherWeb on Tuesday announced that it has acquired ORCS Web Inc., a Windows-based managed hosting provider headquartered in Charlotte, North Carolina. OrcsWeb will continue to operate as a wholly owned SherWeb subsidiary. Other terms of the agreement have not been disclosed. OrcsWeb represents SherWeb’s third acquisition in as many years and its first in Infrastructure-as-a-Service (IaaS) – a segment in which SherWeb has recently made significant inroads. Earlier this year, SherWeb was named a Microsoft 2014 World Hosting Partner of the Year Finalist for its soon-to-be-launched Performance Cloud Servers, which promise the fastest performance on the market.
After a couple of false starts, cloud-based desktop solutions are picking up steam, driven by trends such as BYOD and pervasive high-speed connectivity. In his session at 15th Cloud Expo, Seth Bostock, CEO of IndependenceIT, cuts through the hype and the acronyms, and discusses the emergence of full-featured cloud workspaces that do for the desktop what cloud infrastructure did for the server. He’ll discuss VDI vs DaaS, implementation strategies and evaluation criteria.
It’s certainly no secret that cloud solutions have become an important and increasingly necessary part of how companies do business today. For enterprises, implementing cloud-based services can help boost productivity, enhance efficiency and reduce costs. Private cloud solutions take the cloud concept one step further: A private cloud configuration looks and acts like the public cloud, giving enterprises the same levels of speed, agility and cost savings – but in a far more secure environment in which dedicated bandwidth and security are guaranteed.
GigaSpaces Technologies on Monday announced it has completely re-architected its Cloudify offering to provide Intelligent Orchestration of applications on the cloud. With this product rewrite, the new Cloudify orchestration platform simplifies the application deployment, management and scaling experience on OpenStack, VMware vSphere and other clouds and environments. In current orchestration models, most tools focus primarily on application installation, while much of application management occurs after deployment. As a result, vast custom tool chains are often used to manually manage post-deployment processes such as monitoring and logging, leading to significant overhead, complexity and inconsistency across systems. Cloudify’s redesign provides a simple solution for managing the full application lifecycle. The new intelligent orchestration model introduces a feedback loop that automates fixes and updates without manual intervention, all with a single platform that integrates with a...
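
The feedback-loop idea is the interesting part. As a rough sketch of the pattern (generic Python, not Cloudify's actual API; the function names and threshold are invented for illustration), an orchestrator that owns the loop can measure, decide, and remediate without an operator in the middle:

import random
import time

# Generic monitor -> decide -> remediate loop; the names and threshold
# are illustrative of the pattern, not Cloudify's API.
def get_error_rate(app):
    return random.uniform(0.0, 0.1)   # stand-in for a real monitoring query

def remediate(app):
    print(f"remediating {app}")       # stand-in for a restart/scale workflow

def feedback_loop(app, threshold=0.05, interval=30, cycles=3):
    for _ in range(cycles):           # a real loop would run indefinitely
        if get_error_rate(app) > threshold:   # measure, then decide
            remediate(app)                    # act without manual intervention
        time.sleep(interval)

feedback_loop("web-tier", interval=0)  # interval=0 only to make the demo instant
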
Sharing data is a cornerstone of the scientific method because it makes it possible to replicate work. That foundation is mostly absent from data science, which makes obtaining and reusing knowledge more difficult than it should be. Job postings for data scientists increased 15,000 percent between 2011 and 2012, and Gartner predicted that 63% of organizations would invest in Big Data this year. The communications, consumer, education, financial, healthcare, government, manufacturing, and retail sectors are all adopting business practices that are using data science to inform their activities and improve operations.
“Distrix fits into the overall cloud and IoT model around software-defined networking. There’s a broad category around software-defined networking that’s focused on data center, and we focus on the WAN,” explained Jay Friedman, President of Distrix, in this SYS-CON.tv interview at the Internet of @ThingsExpo, held June 10-12, 2014, at the Javits Center in New York City. Internet of @ThingsExpo 2014 Silicon Valley, November 4–6, at the Santa Clara Convention Center in Santa Clara, CA, will feature technical sessions from a rock star conference faculty and the leading IoT industry players in the world.
“Vote early and vote often.” Back in the 1920s and ’30s, when neither election technology nor oversight were as effective as they are today, and the likes of Al Capone were at work gaming the system, this phrase wasn’t a joke. It was a best practice. If you want guaranteed results, what better way than to get people to the polls early, and then repeatedly, to vote for your candidate? None of this sitting around until the end of the day, hoping that the election goes the way you want. Capone would tell you, “That’s for saps.” What does this have to do with cloud computing? All too often we see IT teams taking a “buy it and hope it works” strategy when it comes to adopting cloud-based apps. They migrate their entire user base to the cloud on faith, assuming that they can worry about performance and availability issues later, if ever. After all, everybody in the company accesses the Internet today without issues so your cloud apps should work just fine, right?
The ability to embed prediction into multiple business processes amplifies the value that predictive analytics delivers. Yet many still see predictive analytics as a separate activity that is the responsibility of a small team of expert analysts. This webinar will show how predictive analytics can be used throughout the organization by anyone looking for answers and how organizations can make predictive analytics the basis for better operational decisions with IBM SPSS Modeler.
Avere Systems has announced that it raised an additional $20 million in venture financing, bringing the total amount invested in the company to $72 million. The Series D funding round was led by Western Digital Capital, with participation from previous investors Lightspeed Venture Partners, Menlo Ventures, Norwest Venture Partners and Tenaya Capital. The funding will be used to accelerate sales, marketing and continued development of the company's hybrid cloud storage solutions.
Register and Save!
Save $500
on your “Golden Pass”!
Call 201.802.3020


Silicon Valley Call For Papers Now Open
Submit your speaking proposal for the upcoming SDDC Expo in Silicon Valley!
November 4-6, 2014


Sponsorship Opportunities
Please Call 201.802.3021
events (at) sys-con.com
SYS-CON's SDDC Expo, held each year in California, New York, Prague, Tokyo, and Hong Kong, is the world's leading Cloud event in its 6th year, larger than all other Cloud events put together. For sponsorship, exhibit opportunities and show prospectus, please contact Carmen Gonzalez, carmen (at) sys-con.com.
Cloud Expo New York All-Star Speakers Included...

KAIL
Netflix

GOLDEN
ActiveState

KEMP
Nebula

BEHR
Praxis Flow

LOUNIBOS
SOASTA

CRAWFORD
AVOA

MORGENTHAL
Perficient, Inc.

COCKCROFT
Battery Ventures

HAFF
Red Hat

SHALOM
GigaSpaces

SUSSNA
Ingineering.IT

ROBERTS
BMC

VERNON
VictorOps

WILLIS
Stateless Networks

ROESE
EMC

PADIR
Progress

AMAR
MyPermissions

O'CONNOR
AppZero

BHARGAVA
JumpCloud

DEVINE
IBM

RUSSELL
IBM

MALEKZADEH
Cumulus Networks

McCALLION
Bronze Drum

NEGRIS
Yottamine Analytics

JACKSON
GovCloud Network

KAVIS
Kavis Technology

HARVEY
Chef

KAR
StrongLoop

McFARLANE
LiveOps

IVANOV
Telestax

DUNKLEY
Acision

FABLING
Esri

MATTHIEU
SKYNET.im

HILLIER
CiRBA

JACOBI
Kaazing

FALLOWS
Kaazing

Follow @SDDCExpo on Twitter


Testimonials
"Great exhibits, great audience, great floor traffic, great conversations with IT leaders and folks in the channel."
TOM LAYDOS
Director, Marketing & Sales Operations at Evolve IP
 
"We had a great experience! We look forward to helping the people we met at Cloud Expo build their businesses."
Cari.net TWEET
 
"The 2012 Cloud Expo in NY was a great success for the Dell cloud team as we met with many customers, partners, and cloud technologists."
STEPHEN SPECTOR
Senior Product Marketing, Dell Cloud Services
 
"Cloud Expo turned out to be an amazing gathering of entrepreneurs."

NISH BURKE
Product Marketing Manager, StorageCraft


Who Should Attend?
Senior Technologists, including CIOs, CTOs, VPs of technology, IT directors and managers, network and storage managers, network engineers, enterprise architects, communications and networking specialists, and directors of infrastructure; and Business Executives, including CEOs, CMOs, CIOs, presidents, VPs, directors, business development, and product and purchasing managers.

Download Cloud Computing Journal & Show Guide
Cloud Computing Journal
Download PDF
Cloud Expo Show Guide
Download PDF

Join Us as a Media Partner - Together We Can Rock the IT World!
SYS-CON Media has a flourishing Media Partner program in which mutually beneficial promotion and benefits are arranged between our own leading Enterprise IT portals and events and those of our partners.

If you would like to participate, please provide us with details of your website(s), event(s), or organization, including basic audience demographics and relevant metrics such as average page views per month.

To get involved, email Lissette Mercado at [email protected].

Latest Blog Posts
Application delivery is always evolving. Initially, applications were delivered out of a physical data center: either dedicated raised floor at the corporate headquarters, space leased from one of the web hosting vendors of the late 1990s to early 2000s, or some combination of both. Soon, global organizations and e-commerce sites alike started to distribute their applications and deploy them at multiple physical data centers to address geo-location, redundancy and disaster recovery challenges. This was an expensive endeavor back then, even without adding the networking, bandwidth and leased-line costs.
When people talk about Big Data, the emphasis is usually on the Big. Certainly, Big Data applications are distributed largely because the size of the data on which computations are executed warrants more than a typical application can handle. But scaling the network that provides connectivity between Big Data nodes is not just about creating massive interconnects. In fact, the size of the network might be the least interesting aspect of scaling Big Data fabrics.
Much has been published about the Open Compute Project. Initiated by Facebook, it has become an industry effort focused on standardization of many parts and components in the datacenter. Initially focused on racks, power and server design, it has also added storage and now networking to its fold. Its goal is fairly straightforward: "how can we design the most efficient compute infrastructure possible", a direct quote from its web site. The focus of OCP has been mostly around hardware designs and specifications. If you look at the networking arm of OCP, you find several Top of Rack (ToR) Ethernet switch hardware designs donated by the likes of Broadcom, Mellanox and Intel. By creating open specifications of hardware designs for fairly standard Ethernet switches, the industry can standardize on these designs, and economies of scale would drive down the cost to create and maintain this hardware. A noble goal, and there are many opinions on both sides of this effort. Mostly referred to as...
Some people are never satisfied. These fearless agents of change are everywhere. They're informed, confident and willing to experiment. They seek out the best business technology solution for the job at hand. They act on instinct. Yes, you could say that they're driven. However, they're also at risk of being labeled as "rogue employees" because they ordered a software-as-a-service (SaaS) offering and perhaps expensed it without prior approval. Sometimes they're the champions of progressive projects that are referred to as Shadow IT -- intentionally bypassing their company's formal evaluation and procurement process. How can this happen? Is it just because their activities are tolerated, or are they being encouraged? If so, by whom? Why would any business leader applaud a team member who breaks the rules? Maybe the simple answer is that staying within the confines of the status quo won't enable a top performer to fully apply their talent, achieving their absolute best.
One of the primary principles of object-oriented programming (OOP) is encapsulation. Encapsulation protects an object's state from being manipulated in ways that are inconsistent with how that state is intended to be used. The variable (state) is made private; that is to say, only the object itself can change it directly. Think of it as the difference between an automatic transmission and a standard (stick). With the latter, I can change gears whenever I see fit. The problem is that when I see fit may not be appropriate to the way in which the gears should be shifted. That is how engines end up being redlined.
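
A minimal sketch of that principle in Python (the Transmission class and gear limits are invented for the example): the gear state is private, and the only way to change it is through methods that refuse illegal shifts.

class Transmission:
    # Encapsulation: gear state is private and only mutated via guarded methods.
    MIN_GEAR, MAX_GEAR = 1, 6

    def __init__(self):
        self._gear = 1  # private by convention; callers should not touch it

    @property
    def gear(self):
        return self._gear  # read access only; no setter is defined

    def shift_up(self):
        if self._gear < self.MAX_GEAR:  # the object decides what is legal
            self._gear += 1

    def shift_down(self):
        if self._gear > self.MIN_GEAR:
            self._gear -= 1

t = Transmission()
t.shift_up()
print(t.gear)   # -> 2
# t.gear = 6 would raise AttributeError: the engine can't be redlined from outside
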
Last week Greg Ferro (@etherealmind) wrote this article about his experience with scripting as a method for network automation, with the ultimate conclusion that scripting does not scale. Early in my career I managed a small network that grew to be an IP-over-X.25 hub of Europe for a few years, providing many countries with their first Internet connectivity. Scripts were everywhere: small ones to grab stats and create pretty graphs, others that continuously checked the status of links and would send emails when things went wrong. While it is hard to argue with Greg's complaints per se, I believe the key point is missing. And it has nothing to do with scripting. In a reply, Ivan's last comment touches on the real issue.
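
For context, the kind of script being described might look something like this minimal sketch (the hostnames, mail addresses, and mail setup are hypothetical), run from cron every few minutes. It works fine for a handful of links; the argument is about what happens when devices, checks, and edge cases multiply.

import smtplib
import subprocess
from email.message import EmailMessage

LINKS = ["10.0.0.1", "10.0.0.2"]  # hypothetical peer addresses to watch

def link_is_up(host):
    # One ICMP echo with a 2-second timeout; returncode 0 means it answered.
    result = subprocess.run(["ping", "-c", "1", "-W", "2", host],
                            capture_output=True)
    return result.returncode == 0

def alert(host):
    msg = EmailMessage()
    msg["Subject"] = f"Link to {host} is down"
    msg["From"] = "netmon@example.net"   # placeholder addresses
    msg["To"] = "noc@example.net"
    msg.set_content(f"Ping to {host} failed; please investigate.")
    with smtplib.SMTP("localhost") as smtp:
        smtp.send_message(msg)

for host in LINKS:
    if not link_is_up(host):
        alert(host)
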
Cloud computing has finally come into its own. While we’ve been hearing for 8 years or more that cloud computing would one day take over the enterprise, the fact of the matter is that it’s been slow going. While the spread of cloud computing solutions hasn’t been as rapid as many early proponents predicted it would be, we are now to a place where cloud solutions are seen as viable for most organizations, and are being utilized regularly.
It's an application world; a world that is rapidly expanding. With new opportunities and markets arising, driven by mobility and the Internet of Things, it is only going to keep expanding as applications are deployed to provision, license, and manage the growing sensors and devices in the hands of consumers. Applications are not isolated containers of functionality. No application winds up in production without a robust stack of resources and services to support it. Storage and compute, of course, are required, but so are the networking services - both stateless and stateful - that provide for scale, security and performance.
The challenge in architecting, building, and managing data centers is one of balance. There are forces competing to both push together and pull apart datacenter resources. Finding an equilibrium point that is technologically sustainable, operationally viable, and business friendly is challenging. The result is frequently a set of compromises whose disadvantages outweigh the advantages. The datacenter represents a diverse set of orchestrated resources bound together by the applications they serve. At its simplest, these resources are physically co-located. At its extreme, they are geographically distributed across many sites. Whatever the physical layout, these resources are under pressure to be treated as a single logical group.
Despite the hype and drama surrounding the HTTP 2.0 effort, the latest version of the ubiquitous HTTP protocol is not just a marketing term. It's a real, live IETF standard that is scheduled to "go live" in November (2014). And it changes everything. There are many performance-enhancing changes in the HTTP 2.0 specification, including multiplexing and header compression. These are not minor updates; they significantly improve performance, particularly for clients connecting over a mobile network. Header compression, for example, minimizes the requirement to transport HTTP headers with each and every request - and response. HTTP headers can become quite the overhead, particularly for those requests comprised simply of a URL or a few bytes of data.
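
A back-of-the-envelope illustration of the header-compression point (the byte counts below are assumptions, not figures from the spec): over HTTP/1.x the same cookies and User-Agent travel with every one of a page's requests, while an HPACK-style scheme sends them once and then references them.

# Illustrative arithmetic only; header and page sizes are assumed.
header_bytes_per_request = 700   # typical-ish browser request headers
requests_per_page = 80           # typical-ish asset count for one page

uncompressed = header_bytes_per_request * requests_per_page
# Compression sends full headers once, then small table references.
compressed = header_bytes_per_request + (requests_per_page - 1) * 40

print(f"HTTP/1.x headers: {uncompressed / 1024:.1f} KiB per page load")
print(f"compressed:       {compressed / 1024:.1f} KiB per page load")
# On a high-latency mobile link, every avoided kilobyte matters.
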
One of the benefits of SDN is centralized control. That is, there is a single repository containing the known current state of the entire network. It is this centralization that enables intelligent application of new policies to govern and control the network - from new routes to user experience services like QoS. Because there is a single entity which has visibility into the state of the network as a whole, it can examine the topology at any given point and make determinations as to where this packet and that should be routed, how it is prioritized and even whether or not it is allowed to traverse the network.
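
A toy sketch of what centralization buys (the topology is invented and the path logic is deliberately simple): because one controller holds the whole graph, it can compute, prioritize, or veto a path for any flow from a single view of state.

from collections import deque

# Hypothetical leaf/spine topology held by a central controller.
topology = {
    "leaf1":  ["spine1", "spine2"],
    "leaf2":  ["spine1", "spine2"],
    "spine1": ["leaf1", "leaf2"],
    "spine2": ["leaf1", "leaf2"],
}

def shortest_path(graph, src, dst):
    # Plain BFS, adequate when every link has equal cost.
    seen, queue = {src}, deque([[src]])
    while queue:
        path = queue.popleft()
        if path[-1] == dst:
            return path
        for nxt in graph[path[-1]]:
            if nxt not in seen:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None

# The controller, not each switch, decides how a flow is routed; the same
# single view would let it apply QoS to the flow or deny it outright.
print(shortest_path(topology, "leaf1", "leaf2"))  # ['leaf1', 'spine1', 'leaf2']
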
If LinkedIn profiles are any indication, User Experience (frequently shortened to UX) is the new orange. Indeed, across all manner of technology, there is an increasing focus on improving user experience. Driven in part by Apple's success on the consumer side, it would appear that IT infrastructure vendors are getting in on the action. In the quest to simplify our collective lives and differentiate in a space defined more by cost than by capability, the user is taking a more prominent role. As it should be.
In the video at this link and embedded below, I provide some context on how new approaches to data can enhance outcomes for public sector organizations, with a focus on real-world use cases. I also mention key requirements that apply at most government organizations for their data, and how organizations are addressing their unique requirements with technology provided by Cloudera:
Inarguably, one of the drivers of software-defined architectures (cloud, SDDC, and SDN), as well as movements like DevOps, is the complexity inherent in today's data center networks. For years now we've added applications and services, and responded to new threats and requirements from the business with new boxes and new capabilities - all of them cobbled together using traditional networking principles that provide reliability and scale through redundancy.
Some people believe good or bad things always happen in threes. I believe you will always be able to find three (and probably more) things that are good or bad and somewhat related, but sometimes I get surprised by the apparent coincidental appearance of several closely related “things”. Last week the folks at networkheresy.com posted a second installment of their “policy in the datacenter” discussion, Cisco announced the acquisition of tail-f and internal to Plexxi we had several intense architectural discussions around Configuration, Provisioning and Policy management. Maybe we can declare June CP&P month for networking. It is mostly accepted that configuration deals with the deployment of devices and applications within an infrastructure. For network devices, it covers the portions of creating a fabric, protocols to maintain this fabric, access and control to the device itself, management connectivity etc. Once a network device is configured, it is a functioning element in a networ...