
Open Source Business Models

17 Nov

Whenever we’re asked about Kaltura’s business model, we find ourselves launching into a broader conversation about the different business models used by open source companies today. Given the frequency of this conversation, we thought we’d write it down, both for those who are thinking of open sourcing their code and for those pondering whether to work with an open source company. This is part of our larger effort to define “Launching an Open Source Project/Company 101” and to catalog the variety of models and companies emerging in this space. We are particularly interested in companies that fully own the copyright to their software, yet decide as a matter of strategy to release their code under an open source license.

The first fundamental question to ask when assessing which business models may apply to an open source company is: who owns the copyright for the code? The open source ecosystem divides into companies that own the copyrights to their code and are free to release that code under any license they choose, and companies that adopt existing code and have no choice over licenses.

  1. Options for companies who do not own the copyright to their code (Red Hat, Zend, Jaspersoft):
    1. Services Model – according to this model, a company sells maintenance, support, documentation, and training services, as well as certification of the software version, in conjunction with the open source software. Customers pay for the peace of mind of having the code tested, certified, and maintained by the services company. (This is the basic Red Hat model.)
    2. Software as a Service – according to this model, the open source project serves as the foundation for a SaaS offering. In a SaaS model, customers pay for the hosting, streaming, and delivery of the software on a managed cloud, regardless of the license of the underlying software. (Acquia is an example here.)
    3. Proprietary plugin/application model – according to this model, a company sells premium commercial add-ons, modules, and applications in conjunction with the open source software and packages both the underlying code and the apps together. (Jaspersoft is an example here.)
  2. Business models available for companies who own the copyright to their code and have decided to release the code under an open source license (examples: Kaltura, MySQL, Instructure). (We’ll explain why they might decide to open source their code another time.)
    1. First, note that all of the business models available to companies who do not own the code are also available to this second set of companies: i.e., the services model, SaaS, and proprietary add-ons/apps.
    2. Dual licensing – according to this model, a company releases the code it owns under a standard commercial license as well as under an open source license. Whereas the open source version is usually free, the commercial option typically comes with a standard licensing fee. (This option can exist with or without feature parity between the open source and commercial versions.)
    3. Freemium Model – according to this model, a company releases software under an open source license and sells premium features on top. Unlike in the plugin/app model, here a company need not create an entirely separate module or plugin; the owners of the code can simply choose not to release certain features. (A minimal sketch of this kind of feature gating follows this list.)
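
To make the freemium/open-core option concrete, here is a minimal sketch of how a code owner might gate premium features: the open source release ships with an empty feature registry, and the commercial (unreleased) build registers closed-source modules on top. All names are hypothetical, and this illustrates the pattern only, not any particular company’s implementation.

```python
# Hypothetical open-core feature gating: the open source distribution ships
# this file with an empty registry; the commercial build adds entries.
from typing import Callable, Dict

PREMIUM_FEATURES: Dict[str, Callable[[], str]] = {}

def register_premium(name: str):
    """Decorator the commercial build uses to attach closed-source features."""
    def wrap(fn: Callable[[], str]) -> Callable[[], str]:
        PREMIUM_FEATURES[name] = fn
        return fn
    return wrap

def run_feature(name: str) -> str:
    """Run a feature if the current build includes it; otherwise upsell."""
    if name in PREMIUM_FEATURES:
        return PREMIUM_FEATURES[name]()
    return f"'{name}' is available in the commercial edition only."

# The following registration would live only in the commercial codebase:
@register_premium("analytics")
def _analytics() -> str:
    return "running premium analytics"

if __name__ == "__main__":
    print(run_feature("analytics"))   # present in this build
    print(run_feature("clustering"))  # never open sourced: upsell message
```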

We hope this gives you a basic breakdown of the options. In the next post we’ll explore the advantages of releasing code under an open source license and creating an open source project.

Do Successful Startups Come in 3s (or more)?

8 Nov

When Cyota started more than 12 years ago, it was developing a solution to provide online shoppers with a single-use online credit card number, so that their primary card would never be revealed online.

It sounded like an extremely innovative idea at the time, particularly because e-commerce was not yet well developed, and shoppers were very concerned about exposing their card details online.

Shockingly, there were three different companies (one out of Israel, one out of India, and one out of Ireland) developing the exact same solution at the exact same time.

Indeed it seems like good startups, like good ideas, come in threes, or even fours.

In today’s celebrity culture, where we hold up the start-up founder or patent registrant as superstar, visionary, and inventor extraordinaire, the concept might seem surprising.  But for those studying the history of innovation, simultaneous invention is now the norm.

As Steven Johnson notes in Where Good Ideas Come From:

One of the most remarkable patterns in all of intellectual history [is] what scholars now call “the multiple:” A brilliant idea occurs to a scientist or inventor somewhere in the world, and he goes public with his remarkable finding, only to discover that three other minds had independently come up with the same idea in the past year.  Sunspots were simultaneously discovered in 1611 by four scientists living in four different countries… The law of conservation of energy was formulated separately four times in the late 1840’s. (34)

Given the Cyota experience, we looked through the history of Internet startups to see if a similar trend holds, and indeed it seems it does:

Early search engines: AltaVista, Ask.com, Yahoo Search
Social networking: Friendster, MySpace, Facebook
Social bookmarking: Delicious, Digg, Reddit
Online video editing: Jumpcut, EyeSpot, Cuts
Instant messaging: Powwow, ICQ
Video sharing: YouTube, Metacafe, Dailymotion
Webmail systems: Hotmail, Yahoo Mail, AOL Mail
Video advertising: Tremor Media, Adap.tv, YuMe
Shopping comparison: Dealtime, mySimon, ShopSmart

Great startups, like classic inventions, represent the product of networked individuals collectively identifying real needs in the world and simultaneously deciding to address these needs.

So, startup founders and entrepreneurs, next time an investor asks you to do a market analysis, take a real look around for those competitors. If you find them, you should see it as a sign of your brilliance and a foreshadowing of good things to come.

But more importantly, the notion that startups come in 3s has profound implications for how we cultivate innovation in entrepreneurial communities and the amount of weight we put on the good idea relative to other indicators of startup success.

Selling Our Wireless Future

7 Nov

One of the reasons we started this blog is to discuss some of the important ideas coming out of academia with the broader tech community.  Too often these ideas get lost – somewhere between the tomes of journals that fill university libraries and the tech news that fills the blogosphere.

‘Activism’ by the entrepreneurial tech community is a funny thing. While we are often quick to mobilize around important world events like the Arab Spring, Wikileaks, or Occupy Wall Street, when it comes to tech policy matters that might jeopardize the open web on which we depend, there is often silence. The reason for this is a conversation for another day (though you can get some insight by reading the comments on Fred Wilson’s AVC). But we do see a quandary here: if entrepreneurs don’t act as stewards of the open web, who will speak for us instead?

We leave you now with a guest cross-post from Professor Yochai Benkler, a mentor and teacher to both of us. He explains why recent moves by the deficit supercommittee to sell off unlicensed spectrum threaten the future of mobile technology in this country and the ability of entrepreneurs to innovate freely in the mobile space.

——

As the deficit supercommittee searches every corner to make budgetary ends meet, one solution they are considering, “incentive auctions” of the TV bands, could threaten the future of wireless innovation. These auctions may lock in an outdated regulatory paradigm, strengthen the dominant mobile broadband carriers, and block the path for some of the most innovative wireless technologies that could improve mobile broadband speed and reduce its price over the next decade. In return, the revenue they will raise is a very modest 1.5 percent of the 1.6 trillion dollar package. The auctions would trade off a small short-term revenue gain for less growth and innovation over the coming decade.

The proposed spectrum auctions are being promoted under the false premise that boosting mobile broadband, smart grid communications, inventory management systems, mobile payments, and health monitoring requires auctioning exclusive pieces of licensed spectrum. In reality, these markets are fast developing through unlicensed wireless applications, like WiFi. When the iPhone crashed AT&T’s mobile broadband capacity, the company didn’t buy more spectrum on secondary markets; it used WiFi to carry much of the data. In the past year WiFi traffic on AT&T’s hotspots has tripled. Today, about half of iPhone and 90 percent of iPad page views are carried over WiFi. Indeed, almost two-thirds of all smartphone and tablet data traffic is carried over WiFi rather than over the carriers’ networks, whose hunger is driving the demand for auctioning TV bands. In Japan, a good place to see the near future of mobile broadband, the second largest mobile carrier contracted a California firm to roll out 100,000 hotspots as a core strategy for its next generation mobile broadband network.

But it’s not only mobile broadband. When you use your E-Z Pass at a toll booth or Speedpass at the gas station, you use unlicensed technology like WiFi, but in a different band. When Wal-Mart moved its field-defining inventory management system to the next generation, it used technology that uses spectrum on the same principle: unlicensed wireless. Almost the entire market for inventory management and access control is now driven by unlicensed wireless technologies. Almost seventy percent of the U.S. smart grid communications market is served by firms that use WiFi and similar technologies, and by one recent account, about eighty percent of the wireless market in the healthcare sector depends on an array of unlicensed strategies. These dynamic markets are telling us something new: The future of wireless will likely be mostly unlicensed, with an important but residual role for auctioned, licensed services. And yet the drive to auctions simply ignores the evidence from actual markets in favor of an outmoded regulatory ideal that is the opposite of what cutting-edge radio engineering and dynamic markets show.

Most of these applications were developed using junk bands, where regulators dumped industrial equipment and microwave ovens. They thrived even in these harsh conditions, but in an effort to open up new, less wasteland-like areas for these dynamic, innovative technologies, the last Republican and current Democratic FCC chairs presided over the bipartisan creation of TV White Spaces, a policy that permits device manufacturers to expand the capabilities of unlicensed devices by sharing the TV bands with broadcasters. The TV Band auctions being pushed through the supercommittee threaten to displace these white space devices. As we look at the enormous success of unlicensed wireless strategies across the most dynamic markets, we see that doing so is penny wise, pound foolish.

Not only will auctions burden the development of unlicensed strategies; if the last major auction is any indication, they will also allow AT&T and Verizon to foreclose competition in their markets. When AT&T argued in defense of its T-Mobile merger, it said that T-Mobile wasn’t much of a competitor “without the spectrum to deploy a 4G LTE network.” But the reason T-Mobile lacks that spectrum is that Verizon and AT&T already own 78 percent of the spectrum bands needed. The new auctions would extend Verizon and AT&T’s foreclosure to the TV bands as well, constraining not only competitors like T-Mobile but the whole field of unlicensed strategies.

As a revenue source, spectrum auctions are a particularly pernicious tax on wireless innovation. They pick the wrong technology for wireless infrastructure by regulatory fiat, and strengthen the market dominance of already-dominant players. The costs of this policy to innovation and growth greatly outweigh its revenue benefits, and the supercommittee simply does not have the time to learn enough to avoid doing more harm than good.

(Image: United States Frequency Allocations – http://en.wikipedia.org/wiki/Frequency_allocation)

Is Ignorance Essential For Start-up Success?

4 Nov

Blissful Ignorance – the Start-up Paradox…

Within entrepreneurial communities, the prevailing wisdom is often that innovation comes from industry experts who jump outside of the mainstream, team up with technologists, and return with a new company or product to disrupt their existing industry.

Is this correct?

It seems to us that when starting a startup in a particular industry, ignorance about that industry provides an advantage and raises the likelihood of building a successful company. Deep knowledge of and familiarity with an industry serve to de-motivate innovation.

Familiarity with an industry means that you know all the path dependencies, existing technological challenges, structural constraints, competitors, and hurdles – all of which are good reasons NOT to start a new company.

Hence, knowledge, which is indeed power, might on some occasions be too much of a good thing: in a competitive and entrepreneurial environment it can act as a deterrent for innovation.

As Steven Johnson has emphasized, a leap in the dark based on a hunch and intuition might prove a better strategy than risk aversion based on (what may seem to be) perfect knowledge.

Consider Zappos’ entry into the clothing business, IndieGoGo’s entry into online loans, Cyota’s entry into the Security and Anti-Fraud market, or even General Assembly’s entry into education. None of the founding teams of these companies came from the industries in which they currently operate. And few even launched their companies with a focus on these industries.

This deterrent effect of knowledge on innovation is part of the reason why companies that have been active in a particular industry for a long time look outside to capture innovation.

Outside innovation can be achieved in different ways:

  1. acquisitions
  2. building a platform that supports external development of applications/plugins (consider the iPhone App Store or Google Apps) – a minimal sketch of this approach follows this list
  3. releasing software as an open source project to encourage external innovation (consider Google’s development of Android)
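
The second option, platform plugins, boils down to exposing a small, stable interface and running code the company never wrote. A minimal sketch, assuming a `plugins` package directory that third-party developers drop modules into (the directory name and the `run()` convention are our invention):

```python
# Hypothetical plugin platform: discover and run third-party modules placed
# in a local "plugins" package.
import importlib
import pkgutil

import plugins  # a package directory that external developers add modules to

def load_plugins():
    """Import every module in the plugins package and keep the ones that
    expose a run() entry point."""
    loaded = []
    for info in pkgutil.iter_modules(plugins.__path__):
        module = importlib.import_module(f"plugins.{info.name}")
        if hasattr(module, "run"):
            loaded.append(module)
    return loaded

if __name__ == "__main__":
    for plugin in load_plugins():
        plugin.run()  # the platform executes externally developed code
```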

We’d be interested to get your feedback.

Who is more likely to start a successful startup?

1. An industry veteran?
2. An ignorant outsider?

Why Digital Literacy? A bit of thought analysis…

2 Nov

As we’ve begun writing and advocating for greater investment in digital literacy, we’ve challenged ourselves to step back and ask: does digital literacy really matter? Should all young people really be forced to take a programming class? Should teachers teach the basics of Internet architecture?

For some, the answer is a resounding yes. But it’s not obvious.

We don’t, for example, teach young people how to build houses or cars at school, though it’s clear that both physical architecture and the basics of combustion engines impact our everyday lives. We don’t even teach many young people the basics of the stock market, investing, or simple money management.

Why then is digital literacy different? Is it different? Or is this just another case of tech exceptionalism?

As we’ve explored the blogosphere it’s become clear that those who advocate for digital literacy are motivated by many different visions and world views. They also attack the challenge from different angles.

For many policymakers and professional training advocates, digital literacy is about empowering the next generation of workers and students with the skills needed to compete and add value in today’s market.

For Mozilla and other free speech advocates, this drive is about creating a “web-literate planet.” It’s about enabling anyone and everyone to understand what’s “under the hood” on the Net and empowering individuals to build upon, understand, and manipulate the “operating system” of our lives.

And for many start-up community advocates, digital literacy is seen as a basic pre-requisite for managing programmers and cultivating the next generation of entrepreneurs.

Yesterday, Fred Wilson, a New York VC who is starting to invest in education projects like Codecademy and Skillshare, wrote a post on the importance of basic coding skills. In it he posted a striking quote from media theorist Douglas Rushkoff.

Of everything I’ve read, Rushkoff provides one of the more eloquent and compelling justifications for investing in digital literacy:

When human beings acquired language, we learned not just how to listen but how to speak. When we gained literacy, we learned not just how to read but how to write. And as we move into an increasingly digital reality, we must learn not just how to use programs but how to make them. In the emerging, highly programmed landscape ahead, you will either create the software or you will be the software. It’s really that simple: Program, or be programmed.

We’ll be refining our own views and justifications over the coming months, but for those fighting the good fight on digital literacy today, hopefully these thinkers both inspire and put a few more arguments in your arsenal.

Many Faces of Open (infographic)

31 Oct

At the Open Video Conference in NYC this fall we had a discussion about the dimensions of “openness.” This infographic provides a summary of that conversation. We thought it was a nice way to start off this blog.

I recommend you click on the image to enlarge it. Enjoy…

visualsforchange.com


Re-Framing the NYTimes Edtech Messaging

27 Oct

In a recent article in the NYT, Matt Richtel shared the story of the Waldorf School in Silicon Valley, which avoids the use of any technology as a teaching aid. The writer highlighted this fact in order to ask whether education technology really is an effective teaching aid that supports better learning results. We believe the discussion needs to be better framed. The growing onslaught of anti-edtech articles is confusing the edtech debate and harming both the industry and students. Here are the key issues, as we see them:

1. Parents’ aversion to screens and digital devices.

This is the true topic of the NYT article. The Waldorf “trend” relates to parents’ concerns about children’s over-exposure to digital screens. Parents are concerned about their kids watching TV and spending endless time with devices. This topic has nothing to do with education technology, though eliminating technology in schools can be one answer to this fear. Yes, one can have schools with no edtech – but such schools are ignoring the reality of our digital life.

2. Effectiveness of Edtech in schools.

This is an important topic that requires a deep assessment of the different technologies used by schools. Edtech should be used as an efficient, interactive means of delivering high quality, personalized educational content. Edtech does not replace the need for high quality teachers, though it may standardize the level of content delivered to students.

3. The use of Edtech to achieve Digital Literacy.

This topic relates to the use of edtech in enabling digital and technology literacy. It is our firm belief that edtech should be used to promote exposure to new, exciting technologies and digital communications tools. However, it is also important that students do not use technology as though it were a “black box.” In the same way students are encouraged to analyze digital works, schools should “open up” new technologies and discuss their composition.

What level of digital literacy should we expect students to obtain within schools? This is the main topic of our upcoming post.

Edtech Investment is Essential For Tech Literacy

20 Oct

The discussion about investment in educational technology should focus not only on learning results but also on its contribution to the digital literacy of future generations.

As school districts across the US enter 2012 planning, there is an increasing debate about the value of educational technology investments. For some, “edtech” is a way to reduce costs and increase operational efficiencies in schools. Others express an almost messianic faith that edtech investment will support teachers and drive learning results in core academic subjects – reading, math, science.

On the learning results front, the latest news looks bleak.  As the New York Times recently reported, education technology companies and advocates have grossly inflated the software report card.

In a nutshell, “schools are spending billions on technology, even as they cut budgets and lay off teachers, with little proof that this approach is improving basic learning. This conundrum calls into question one of the most significant contemporary educational movements.”

Yet educational investment is about more than just spurring existing learning goals through new means. Classrooms of the future will play an essential role in ensuring that the next generation of students acquires the basic digital skills needed to succeed in today’s world – from computer programming and online research to analysis of social data and basic web publishing. Furthermore, in a country that continues to suffer from a great digital divide, the classroom remains one of the few places where students who lack computers, broadband connections, or smartphones at home can reliably access these tools.

We are now in the midst of an economic recession, with increased uncertainty upon us. While significant numbers of high school and college graduates live in areas where job opportunities just don’t exist, many more workers are unnecessarily unemployed because there remains a tremendous mismatch between skills and market needs.

It’s time we in the US acknowledge the digital skills gap and define a “Minimum Standard of Digital Literacy,” a standard that every high school and college student should reach by graduation. 

Should we demand that every high-schooler know how to write a basic computer program? Should they know how to analyze a Wikipedia article and decipher its sources? What about editing a video or deconstructing a commercial? We will have to work to define the standard, and there is room to debate the balance of hard computer skills vs. critical thinking abilities. But the need is there. Once we establish the Minimum Standard, we will have a rubric from which to design new edtech products, direct investment in infrastructure, assess student performance, and analyze the value of our tech investments.
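
For a sense of scale, the “basic computer program” bar could be as modest as a few lines that read input, compute something, and report a result. The exercise below is purely illustrative, not a proposed standard:

```python
# A program at roughly the level a minimum standard might expect of a
# high-schooler: input, a small computation, formatted output.
name = input("What is your name? ")
scores = [88, 92, 79]
average = sum(scores) / len(scores)
print(f"Hi {name}, your average score is {average:.1f}.")
```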

We think that the digital literacy of children is an essential frame through which the ROI on edtech investments should be assessed.  In future posts we will propose a Minimum Standard of Digital Literacy and look forward to collectively debating and agreeing on this standard.

Open Platform vs. Open Content. The Big Debate at Educause 2011!

19 Oct

In listening to the announcements by both Pearson Education Inc. and Blackboard Inc. this week, we found ourselves thinking through which might have more appeal to educational institutions:

1. An open and potentially free learning platform, leaving Pearson to focus on its core content business. This is Pearson’s suggestion.

2. A proprietary learning platform through which educators and institutions will be allowed to share content under a Creative Commons license, leaving Blackboard to focus on its core proprietary software business. This is Blackboard’s suggestion.

There are several challenges to both models:

First, in order for the Pearson platform to be truly open, it would need to guarantee its long-term openness, ideally by making the platform available under an open source license. We also wonder about the extent to which institutions, instructors, and even other content companies will agree to let Pearson play an even more central role in the educational content game. If the OpenClass vision rings true, Pearson will become not only the dominant provider of educational content but also the central “content filter” for education. If OpenClass is truly open, there is limited risk, as other content providers could compete as well without facing potentially harmful taxes or filtering. If OpenClass is not truly open, then there is room for concern.

Blackboard, by contrast, is interested in maintaining the proprietary nature of the platform (its main business). It assumes that an open content model would have broad appeal, adding to the value of the Blackboard platform and making it all the more “sticky.”

Whether schools will trust Blackboard as a vehicle for content sharing is still an open question. If the content is truly open, then schools will gain a valuable repository of open content that can be used outside of Blackboard as well. This would create a tremendous public asset.

So the contest begins: Open Platform vs. Open Content. Two of the largest players in education have each placed their bets. Now we’ll wait and see how “open” each is really willing to be…

The Future of “Open Source” Licenses

19 Oct

Over the past decade we’ve seen increasing adoption of open source technologies, both by software companies as part of their platform development and by businesses and consumers who purchase open source products. In addition, governments and educational institutions are increasingly promoting the adoption of open source technologies.

In recent years, particularly in the start-up sector, we’ve also seen companies use what we call “Commercial Open Source” licenses. These licenses restrict the use of the software and charge consumers, yet they provide access to the source code and allow companies to create derivative works.

In the wake of this rapid evolution and promotion of open source technologies, there is rampant confusion about what constitutes an “open source” project or an “open source” license. The definition is important because each type of license has different implications for the promotion of innovation and the freedom, distribution, and use of software. Our goal here is to promote a discussion on this topic and suggest an initial hypothesis.

What is open source?

It is currently accepted that in order for software to be defined as “open source” or “free” software, its license must not only allow access to the source code but also permit distribution that is free and unrestricted and allow the creation of derivative works. In order to be considered “open source,” there must also be a guarantee that this openness cannot be revoked.

It is interesting to consider different licensing schemes and test them against this definition:

Pure Open (MIT and the like): source code is made available; creation of derivative works is permitted, and their creators own them; there are no restrictions on the redistribution of derivative works or on the use and distribution of the software; and public access to the source code is guaranteed.

GPL and the like: source code is made available; creation of derivative works is permitted, and their creators own them; however, redistribution of derivative works is restricted (it must happen under the GPL); use of the software is otherwise unrestricted; and public access to the source code is guaranteed.

“Commercial” Open Source: source code is made available (though typically under confidentiality); creation of derivative works is permitted, and their creators own them; redistribution of derivative works and use of the software are restricted by the commercial terms; and public access to the source code is not guaranteed.

The GPL license (which allows source code access, free usage, and the creation of derivative works, but insists that any derivative work, when redistributed, is also released under the GPL terms) is actually restrictive, as it requires redistribution of derivative works under a specific license. From an innovation standpoint, this license protects early developers from being “exploited” by others who build on their work but do not redistribute. Yet, in restricting later-stage developers and preventing them from using the business models and licensing schemes of their choice, it inhibits downstream innovation (see also: a license to kill innovation).

Licenses such as the Apache License and the MIT License truly meet the broadest open source definition: they guarantee openness, allow for source code access and the creation of derivatives, and do not include meaningful restrictions that might inhibit the motivation of downstream developers (requiring attribution, for example, is not a real restriction).

Commercial Open Source licenses restrict usage based on payment but provide source code access and permit derivative works. These licenses would not traditionally be considered “open source” because they restrict distribution, but they do allow companies “to build on the shoulders of giants,” which is one of the original rationales of open source licensing. That said, under Commercial Open Source licenses, once companies develop new innovations, they cannot sell or sublicense the technology outside of the predefined commercial terms (i.e., number of users, servers, etc.).

Commercial Open Source is not a broadly recognized category; however, it is an extremely interesting one from an innovation standpoint. Downstream innovators are motivated to develop derivative works: they can own them, and they can redistribute them. Still, these licenses cannot really guarantee the openness of the source code or its availability to the public. Typically, source code access in this model is subject to confidentiality.
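
One way to sharpen the distinctions is to encode the working definition above as a simple check over the three license families. This is a rough sketch of our own framing; the attribute names and yes/no classifications are our simplification of the discussion, not legal analysis:

```python
# Encode the post's working definition of "open source" and test the three
# license families against it. Classifications follow the discussion above.
from dataclasses import dataclass

@dataclass
class LicenseProfile:
    name: str
    source_available: bool       # licensees can read the source
    derivatives_permitted: bool  # derivative works may be created
    use_restricted: bool         # e.g., fees tied to users or servers
    openness_guaranteed: bool    # public access to the source is irrevocable

def is_open_source(lic: LicenseProfile) -> bool:
    # Copyleft conditions on redistribution (as in the GPL) are deliberately
    # not treated as disqualifying here, mirroring common usage.
    return (lic.source_available
            and lic.derivatives_permitted
            and not lic.use_restricted
            and lic.openness_guaranteed)

PROFILES = [
    LicenseProfile("MIT/Apache", True, True, False, True),
    LicenseProfile("GPL", True, True, False, True),
    LicenseProfile("Commercial Open Source", True, True, True, False),
]

for p in PROFILES:
    print(f"{p.name}: open source? {is_open_source(p)}")
```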

Further analysis is needed, but thus far our hypothesis is as follows: if the key objective is to guarantee the openness and availability of the software to the public, open source licenses are the right choice. If the key objective is to promote innovation, commercial open source licenses are an extremely valuable path.