GitHub – P2P Foundation (blog.p2pfoundation.net)

Censorship machines are coming: It’s time for the free software community to discover its political clout (10 April 2018)

Continuing our coverage of the European Parliament’s heinous proposal to filter uploaded content, Julia Reda writes about the disturbing consequences it could have for FLOSS projects.

Julia Reda: Free software development as we know it is under threat by the EU copyright reform plans.

The battle on the EU copyright reform proposal continues, centering on the plan to introduce upload filters. In short, online platforms would be required to monitor their users’ uploads and try to prevent copyright infringement through automated filtering. As most communication online consists of uploads onto different platforms, such “censorship machines” have broad consequences, including for free and open source software (FOSS) repositories.

On these platforms, developers from across the world collaborate on software projects that anyone can freely use and adapt. Automated filters would be guaranteed to throw up many false positives. Automatic deletion means uploaders are presumed guilty until proven innocent: Legitimate contributions would be blocked.

The recent outcry about this in the FOSS community is showing some results: Our concerns are getting lawmakers’ attention. Unfortunately, though, most are misunderstanding the issue and drawing the wrong conclusions. Now that we know how powerful the community’s voice is, it is all the more important to keep speaking up!

Why is this happening?

The starting point for this legislation was a fight between big corporations, the music industry and YouTube, over money. The music industry complained that they receive less each time one of their music videos is played on a video platform like YouTube than they do when their tracks are listened to on subscription services like Spotify, calling the difference the “value gap”. They started a successful lobbying effort: The upload filter law is primarily intended to give them a bargaining chip to demand more money from Google in negotiations. Meanwhile, all other platforms are caught in the middle of that fight, including code sharing communities.

The lobbying has engrained in many legislators’ minds the false idea that platforms which host uploads for profit are necessarily exploiting creators.

Code sharing

There are, however, many examples where the relationship between platform and creators is symbiotic. Developers use and upload to software repositories voluntarily, because the platforms add value. While GitHub is a for-profit company, it supports not-for-profit projects: it finances its free hosting of open source projects by charging for commercial use of the site’s services. Thus open source activities will be affected by a law designed to regulate a fight between giant corporations.

In a recent blog post, GitHub sounded the alarm, citing three reasons why upload filters are a terrible fit for software projects (the sketch after this list illustrates the false-positive problem):

  1. Code needs to be filtered under this law because it is copyrightable – but many developers intend for their code to be shared under an open source license.
  2. The risk for false positives is very high because different parts of a software project may be covered under different license terms, which is very hard for automated technology to adequately handle.
  3. Automatically having to remove code suspected of infringing copyright may have devastating consequences for software developers who have built on shared resources that could suddenly vanish.
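
As a rough illustration of that false-positive problem, here is a minimal sketch of a naive hash-based upload filter. Everything in it is hypothetical (the fingerprint index, the file names, the GPL snippet); it is not any platform’s actual filtering system, only a demonstration of why matching content without understanding licenses blocks legitimate uploads.

```python
# Minimal sketch of a naive hash-based upload filter (hypothetical data only,
# not any real platform's system). It registers "protected" files by content
# hash and blocks any upload whose hash matches -- even when the matched code
# is open source and its license explicitly allows anyone to reuse it.
import hashlib

def fingerprint(content: str) -> str:
    """Content hash used as a crude copyright fingerprint."""
    return hashlib.sha256(content.encode("utf-8")).hexdigest()

# A rights holder (or an over-broad crawler) registers files as "protected".
gpl_utility = "def greet(name):\n    return 'Hello, ' + name\n"  # GPL-licensed, reuse permitted
protected_index = {fingerprint(gpl_utility): "registered work"}

def filter_upload(files: dict) -> list:
    """Return the paths the filter would block. It sees matches, not licenses."""
    return [path for path, content in files.items()
            if fingerprint(content) in protected_index]

# A developer legitimately vendors the GPL utility into their own project.
upload = {
    "myproject/vendor/greet.py": gpl_utility,       # allowed by the GPL, blocked anyway
    "myproject/main.py": "print('original code')",
}
print(filter_upload(upload))  # -> ['myproject/vendor/greet.py'], a false positive
```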

Concerns are being heard

In its latest draft, the Council of the European Union seeks to exclude “not-for-profit open source software developing platforms” from the obligation to filter uploads. This amendment is a direct result of the FOSS community’s outcry. However, this exception would not cover for-profit platforms like GitHub and many others, even if only a branch of their operations is for-profit.

Rather than questioning the basic principle of the law, politicians are trying to quell criticism by proposing ever more specific exceptions for those who can credibly demonstrate that the law would adversely affect them. Creating such a list of exceptions is a Sisyphean task sure to remain incomplete. Instead, upload filters should be rejected as a whole as a disproportionate measure that endangers the fundamental right to free expression online.

We can do it!

To achieve this, we need your help. The FOSS community can’t just solve problems with code: It has political clout, strength in numbers and allies in the Parliament. We have already started to effect change. Here’s how you can take action right now:

  1. Sign the open letter at SaveCodeshare.
  2. Use Mozilla’s free tool to call MEPs.
  3. Tweet at the key players in the Parliament’s Legal Affairs Committee via FixCopyright.

Technical Sidenote:

  • Fundamentally, three players are involved in the legislative process. The Commission drafted an initial legal proposal, which the European Parliament and the Council of the European Union can propose changes to. Within the Parliament, this legislation is first discussed in the Legal Affairs Committee, with each political group nominating a negotiator. Once the Committee has voted to approve the compromise established by the negotiators, it will be put to vote in the plenary of the Parliament, before negotiations begin with the other institutions. The exact legislative path so far can be found here.

To the extent possible under law, the creator has waived all copyright and related or neighboring rights to this work.

Could Sharing Research Data Propel Scientific Discovery? (28 January 2018)

Cross-posted from Shareable.

Ambika Kandasamy: Cognitive neuroscientist Christopher Madan says open-access data, that is, data freely shared among researchers for use in their studies, can not only save time and money, it can enable scientists to “skip straight to doing analysis and then drawing conclusions from it,” if the datasets they need already exist. Madan works as an assistant professor at the University of Nottingham in England, where he studies the impact of aging on the brain, focusing specifically on memory. He started using open-access data in his work about three years ago.

Given the stiff competition for funding, scientists like Madan are turning to open-access data as a way to expedite their own research process as well as the work of others in the field. Madan says there are various benefits to using open-access data in research — namely, it provides researchers with large and diverse datasets that might otherwise be difficult to obtain independently. This pre-existing data could help them make inferences about generalizing the results of their studies to larger populations, he says. Making research data freely available, however, isn’t such a straightforward process. In some cases, especially when researchers use patient data in studies, they must take steps to anonymize it, he says, adding that “we also need to have balance, so we don’t become too dependent on specific open datasets.”

We spoke with Madan about how he uses open-access data in his research.

Ambika Kandasamy, Shareable: I attended a talk you gave at the MIT Media Lab in August about some of the benefits of sharing research material such as MRI datasets with other researchers. What compelled you to move towards this open and shared approach?

Christopher Madan: To some degree, I’m more a consumer of open data than adding to it. The main plus is that the data is already there. Instead of, I have an idea and then I have to acquire the data — both applying for grants or somehow getting the money side sorted and then having a research assistant to put in the actual time to get them — people to come in and be scanned. Scanners are kind of expensive. All of this would take, on the optimistic side, I’d say several months or more into years, if I wanted to get a sample size of like three, four hundred people.

But for the sake of just looking at age, datasets exist. It can take a few minutes to download, maybe into hours depending on which one and how much other data I have to sort through to organize it into a way that is more how I want the data organized to be analyzed. It’s still in the scale of hours and maybe days versus months to years. Then the analysis on that going forward is the same at that point.

In an article in the Frontiers in Human Neuroscience journal, you wrote that “open-access data can allow for access to populations that may otherwise be unfeasible to recruit — such as middle-age adults, patients, and individuals from other geographic regions.” Could you elaborate on that?

The maybe more surprising one of those is the middle-age adults. People in their 30s to 50s could generally have jobs and families and are busy, so it’s harder to get them to be in research studies. If we’re interested in aging, getting young adults that are effectively university student age, they’re relatively easy to be recruited in university studies because they’re walking down the halls of the same places that the research is done. Older adults, to some degree, can be easier to recruit. … But middle age adults have a lot less flexibility of their time. Even if they’re interested, they have a lot of other commitments that they have to balance. It’s just harder to get them into research studies. Now, it’s not that they’re impossible to get. It’s just effectively lower odds for that demographic. If people have already spent the effort of trying to get them in, then we should take advantage of that data and not just use it for one study and that’s it, but answer multiple research questions and try to get more out of the same data that’s already been collected.

In the article, you also mentioned that you keep a list of open-access datasets of structural MRIs on GitHub. Have other researchers contributed to this list?

Yes, they have. I initially made a list of basically just stuff that I knew. One morning, I was like, “maybe I should do this.” I was keeping track of things, but every so often, new datasets get shared. How much can you keep in your head or keep the PDFs related to these in a folder? It’s not that great of an organization. So I thought, maybe I’ll make a list where I’ll say the name — some of them have shorter abbreviations, so a spelled out version, a link to where that data actually is, a link to the paper that kind of describes it, some notes about what kind of MRIs are with it or how many individuals are included in it, the demographics — is it all young adults or old adults — that sort of information. I basically just made a list of it and put that online. Other people found it useful. Some people needed parts of that but not others, or generally didn’t think about open-access data as much until that point. Here’s a list of them. You can look up what’s there and what might be useful to you and take advantage of it.
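
To make the structure of such a list concrete, here is a small sketch of what a machine-readable dataset index with those fields might look like and how a researcher could filter it. The entries, field names, and links below are invented for illustration; they are not taken from Madan’s actual GitHub list.

```python
# Illustrative sketch of an open-access MRI dataset index with the kinds of
# fields Madan mentions (name, abbreviation, data link, paper link, scan types,
# sample size, demographics). All entries here are invented placeholders.
datasets = [
    {"name": "Example Lifespan Sample", "abbrev": "ELS",
     "data_url": "https://example.org/els", "paper_url": "https://example.org/els-paper",
     "modalities": ["T1w"], "n": 350, "age_range": (18, 87)},
    {"name": "Example Young Adult Cohort", "abbrev": "EYAC",
     "data_url": "https://example.org/eyac", "paper_url": "https://example.org/eyac-paper",
     "modalities": ["T1w", "fMRI"], "n": 120, "age_range": (18, 30)},
]

def covers_middle_age(entry, lo=30, hi=60):
    """True if the dataset's age range overlaps the hard-to-recruit 30-60 band."""
    youngest, oldest = entry["age_range"]
    return youngest <= hi and oldest >= lo

# Find datasets with structural (T1-weighted) scans that include middle-aged adults.
matches = [d["abbrev"] for d in datasets
           if "T1w" in d["modalities"] and covers_middle_age(d)]
print(matches)  # -> ['ELS']
```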

Since then, some that I basically didn’t include, that I didn’t know of or didn’t think of or whichever, that other people are involved in, they requested to add themselves to the list, and I approved that. Other ones, people that aren’t just involved in the data collection of it, but knew of that weren’t in the list, contributed to it. It’s grown a bit since then, particularly I’ll say from other people’s additions, which also shows other people are looking at it and making a note of it. At least you can have people favorite it for later. I think it’s about 2,000 or so people have. I think maybe eight, nine people have actively added new things to it, so it’s growing a bit. Again, it is a bit of a specialized topic and resource, but other people have found it useful, so that does kind of show that it’s not just a list that I made for myself, but other people have found some benefit in this as well.

How could this kind of open-access data accelerate the process of scientific discovery?

I think the main thing is just after having some idea about what datasets exist — as soon as you have some sort of research idea and you can match it onto something of that sort — you can just download the data. In some cases you have to do an application, so maybe there’s a week or something when someone needs to approve that you’re using this for valid purposes, but you can skip straight to doing analysis and then drawing conclusions from it and writing up a research paper if it went somewhere, rather than having things be drawn out for probably several years.

From your own experience, have you noticed any trends over the years in data sharing among researchers?

There’s definitely more open data now than there used to be. That’s great, both in terms of more people using it, but also just more people sharing whatever data they’ve been collecting anyway. From more personal analysis, talks with researchers that have not shared data yet, but have been thinking about it for data they’ve already been collecting — can they share it because in terms of consent of what the initial participants gave? Would that include sharing of their data when that wasn’t explicitly asked? Even if that doesn’t and they’re working with more medical kind of patient data, then you can still plan forward and say, “okay, what do we need to add?” A couple of extra sentences to the consent form to allow for this at this point forward even if we can’t do it retrospectively. People are thinking about it even beyond just what’s kind of more apparent in terms of what data is actually available today — little more behind the scenes. The field is shifting in that direction. It’ll just continue along that trajectory.

This Q&A has been edited for length and clarity. Photos of Madan by Dan Lurie and Yang Liu. This is part of Shareable’s series on the open science movement. Further reading:

How the Mozilla Science Lab is improving access to research and data

LibreTaxi’s Roman Pushkin on Why He Made a Free, Open-Source Alternative to Uber and Lyft (25 June 2017)

Cross-posted from Shareable.

Nithin Coca: With all the controversy engulfing the global ride-hailing giant Uber, there is more attention on alternative platforms that meet people’s transportation needs and don’t have the company’s ethical baggage. One of the newest and most promising alternatives is LibreTaxi, founded by Roman Pushkin, a San Francisco-based developer and architect with a decade of experience in the technology sector.

LibreTaxi is a completely open-source project, meaning that developers can take the source code and adapt it for local uses. Since it was launched in Dec. 2016, the app, which can be used to find rides across the globe, has grown to 20,000 users. The highest use so far is in Taiwan, Iran, and Russia.

Currently, it is a simple app that can be downloaded and used on the messaging platform Telegram. Through its easy-to-use bot, riders and drivers are connected directly; they negotiate prices independently of LibreTaxi and pay fares in cash. We talked with Pushkin about LibreTaxi, its origins, and how it fits into the larger ride-hailing and ride-sharing ecosystem.
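
To give a feel for what a bot like that has to do behind the scenes, here is a minimal sketch of rider/driver matching. The data model, the driver handles, and the 10 km radius are assumptions made for illustration, not LibreTaxi’s actual implementation.

```python
# Minimal sketch of bot-style rider/driver matching. The data model, handles
# and the 10 km radius are illustrative assumptions, not LibreTaxi's actual
# implementation; the real service runs as a Telegram bot and leaves price
# negotiation and payment entirely to the people involved.
import math

def distance_km(a, b):
    """Rough planar distance between two (lat, lon) points; fine at city scale."""
    dlat = (a[0] - b[0]) * 111.0
    dlon = (a[1] - b[1]) * 111.0 * math.cos(math.radians(a[0]))
    return math.hypot(dlat, dlon)

drivers = {
    "@driver_anna": (25.0478, 121.5319),   # hypothetical handles and locations
    "@driver_omid": (35.6892, 51.3890),
}

def find_nearby_drivers(rider_location, radius_km=10.0):
    """Return driver handles within the radius; the fare is then negotiated in chat."""
    return [handle for handle, loc in drivers.items()
            if distance_km(rider_location, loc) <= radius_km]

rider = (25.0330, 121.5654)                # a rider in Taipei
print(find_nearby_drivers(rider))          # -> ['@driver_anna']
```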

Nithin Coca: Where did the idea for LibreTaxi originate from? Why did you decide to make it an open-source project?

Roman Pushkin: The idea came from where I was born, in Russia, in a village located far from any big city. There were no services like Uber there. There was just this list, a piece of paper with phone numbers, and when people were looking for a ride, they would just call each number on the list. It was not very convenient, so we tried to improve it with computers. Initially we used Skype chat for this purpose. It worked, but it was not very convenient either: when someone needs a ride you have to scan through all of the messages to find where they are going, their location, etc.

Public chat does not solve this problem efficiently: it works, but not that great. So I started looking for a way to create an application for this purpose. The aim was to create something like Uber, but open source, and free for everyone. Hence, LibreTaxi. LibreTaxi was originally created for rural areas, but it works in cities too.

LibreTaxi is open source because people from India, North and South America, China, from Russia, from any part of the world should be able to use it and customize it.

How is LibreTaxi different from Uber and Lyft?

There are three main differences. The first: LibreTaxi is free for drivers. Second, anyone can register, and anyone can become a driver in just one minute. And the third difference: there’s no built-in payment system, so passengers have to pay drivers with cash.

Actually, the aim of LibreTaxi is not to compete with Uber directly. If someone tries to build an application to compete with Uber, this battle is lost already. They spend a lot of money on app development and promotion in different countries.

LibreTaxi is different, and it targets different audiences. For example, in many Latino communities across the U.S., there are people who are not eligible to work in the U.S., so they can’t drive for Uber. Also, in those communities, many people have outdated vehicles, more than 10 years old, so Uber won’t accept them as drivers. There’s no such problem with LibreTaxi. It will be much easier to use LibreTaxi inside that community, to give rides to people you already know. LibreTaxi has the same concept as Uber, but in reality, it is completely different.

We’re targeting different people, people who already know who their passengers are, who their drivers are, and we hope that LibreTaxi can help their own community.

What is your growth strategy going forward? How can you achieve financial stability while also meeting user needs?

Right now, I am working on this only when I have time, in evenings, weekends, but I am planning to work on this full-time. For this, LibreTaxi needs to be more organized.

The very first thing is that we are planning to create a nonprofit organization for LibreTaxi, because I want people to know that this service is absolutely free, and will stay that way. We are not going to charge drivers and cut their earnings like Uber does. The second thing is that the nonprofit can help us make this application more user friendly, safer, and help us polish some rough edges. Our financial model will be based on donations. We’re not looking to make a lot of money, and we’re not going to be a middleman between passengers and drivers.

Right now, I am paying for all the servers out of my pocket. I can afford that for now, but for the future, if we reach one million users, as is our goal in the next two or three years, we may need more servers than we have now.

Actually, the name LibreTaxi is inspired by LibreOffice, which is a free and open source replacement for Microsoft Office, and they are our model. They are a nonprofit that takes donations, and they’ve grown to 75 million users, and they expect it to be 200 million users by 2020.

Another thing we are considering is to add Blockchain technology to LibreTaxi. Not sure how this will be implemented, as Blockchain is something very new, and we are very early in this game, but, for example, we could enable payments via Bitcoin.

Have the recent, seemingly non-stop headlines about Uber brought more attention, or more users, to LibreTaxi?

Partially, the success, so far, of LibreTaxi was possible because of these events that happened to Uber. But only partially, because LibreTaxi is not the same as Uber. I am working on this application alone, by myself, so it’s not possible to build a shiny app, with all these features like Uber.

How many users do you have? Can Shareable readers download LibreTaxi and expect to find rides (or riders) easily?

It is very [easy] to install the application — just need to install Telegram, and then you can find LibreTaxi, or you can go to our website and follow the instructions.

As for finding rides, we have little bit more than 20,000 users worldwide at the moment, a good number for a two-month-old project. If you look for a ride in areas like Taiwan, or Iran, or Moscow, I think it is possible to find a ride. But if you are looking in other cities, maybe you’ll find a ride, or maybe you won’t.

Even if you can’t find a ride, I hope your readers will be interested in this application because they can use it for their own communities, their own small cities, and even for their own buildings. For example, I live in a complex with 100 apartments, and I’ve listed an advertisement on the wall, where people usually walk by. Now, sometimes I give rides to my neighbors, so you can use this application right now and even try it in your building, or family.

What’s the next step for LibreTaxi, and how do you plan to grow in the future? Do you have a financial plan to ensure both a better product, and sustainability?

Our plan for this year is to add more languages. We’ve already translated the application to 17 languages, and the website is translated to 12 languages. By adding more languages, we hope to reach more people in these countries.

The next step is for us to listen to people about what they need, expect, and see in the application. We want to deliver features they would like to see. Right now, LibreTaxi is something very fresh and it has minimal functionality.

Users are the key to our growth. That’s why we ask, if they like this application, please spread the news: share it on Facebook, in public chat channels, etc. It is very important because we do not have any budget for promoting LibreTaxi.

I’m constantly looking for feedback, connections, so if anyone is interested in talking to me, they can find my email on GitHub. Feel free to reach out and tell me about your community, about transportation problems you have, and I’ll try to help you and learn something new from you.

Header photo of traffic in Bangkok, Thailand, by Connor Williams via unsplash

The copyright of the Commons has to become dynamic (30 September 2016)

Luis Enríquez from Ecuador recently contacted us to share this text on what he calls “Smart Copyright Licenses” (A CopyFair venture). Luís informs us that some projects are already using these early drafts.

Luis Enríquez:

THE COPYRIGHT OF THE COMMONS HAS TO BECOME DYNAMIC! This is a very challenging claim, and I will explain why.

The law is supposed to regulate all human activities, or at least that is what we learn in law school. Whatever our legal tradition (common law or Roman law), we always try to fit new human activities into our predefined rules.

A good example is copyright law. The holy grail of copyright law is still the Berne Convention of 1886, last revised in 1971. Of course, we didn’t have computers or the Internet in 1886. However, in 1996 the WIPO Copyright Treaty gave software and databases the same protection as literary works.

In my opinion, this was a wrong and lazy choice. Software should not be protected like literary works, because the environment it lives in is totally different. Just think about the protection term of 70 years after the death of the creator: that may be suitable for a song or a film, but it is ridiculous for software.

The copyright license has become the way out. General-purpose public licenses (such as the GPL, MPL, MIT and BSD licenses) rose because they make it possible to establish permissions and obligations for users outside the boundaries of the “all rights reserved” copyright tradition.

Nevertheless, general-purpose public licenses are still STATIC, reproducing the static nature of copyright law. The software industry adopted the same versioning methodology as literary works: the first version of a program released in 2000, the second version released in 2003, and so on.

This versioning methodology freezes (or hides from the public) development for a period of time. It is suitable for closed source software, especially when the production and distribution of the software are controlled by a single enterprise, but not for community projects.

Let’s take a look at what is going on in open source community projects today. Open source software projects are hosted in open repositories such as GitHub or Bitbucket. Many freelance developers join a project, making new commits, adding new code, fixing bugs, building libraries, and so on. Users can download the source of the software at any time, because it is always available.

This dynamic model of software production brings new legal paradigms such as:

1. Who are the copyright holders? New copyright holders may appear at any time, without any employment relationship with the existing ones.

2. Lack of legal personality. Many open source software projects have no legal personality. Therefore, under most copyright laws, each contributor still owns his or her source code contribution.

3. Lack of flexibility. As the project is always under development, conditions can change and stop being compatible with the original license terms, e.g. newly contributed code comes under license terms that are not compatible with the project’s license.

4. How to split retributions or donations among copyright holders? As there is no employment relationship, it is FAIR that each contributor receives a share of the overall retribution (a toy split is sketched just after this list).
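
As a toy illustration of that kind of split (the names, shares and amount below are invented, and a real project would first have to agree on a metric such as lines, commits or review work), a donation could simply be divided in proportion to each contributor’s share of the code base:

```python
# Toy illustration of a proportional split of one incoming donation.
# Contributor names, shares and the amount are invented placeholders.
shares = {"alice": 0.50, "bob": 0.30, "carol": 0.20}   # agreed fractions of the code base
donation = 100.00

payouts = {name: round(donation * share, 2) for name, share in shares.items()}
print(payouts)  # -> {'alice': 50.0, 'bob': 30.0, 'carol': 20.0}
```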

A few months ago, I cooperated with an open source developer and Bitcoin enthusiast named Dawid Ciężarkiewicz. He helped me understand the problems facing software community projects today. We worked together on a project named CopyFair Corp, and the result is the CopyFair Software License (CFSL). The first license draft can be read here:

THE COPYFAIR SOFTWARE LICENSE V.0

Our challenge was to create a retribution-based license for dynamic software production. The CFSL follows the commons-based reciprocity guidelines developed by Michel Bauwens and other well-known commoners. However, in the process of writing the CFSL we arrived at a promising idea: SMART COPYRIGHT LICENSES. The polymorphic nature of CopyFair and the CFSL definitely points in that direction.

A smart copyright license should automatically resolve these problems of software community projects. It would be updated every time a new commit is made to the software repository, and it could operate and be stored on a blockchain, just like smart contracts on Ethereum.

These are some advantages:

1. Polymorphic features. A license can adapt itself to different situations: it could apply different terms to commercial and noncommercial uses by detecting how the software is being exploited. The license is integrated within the source code.

2. Flexibility. If some parameters of the software change, the license also changes. E.g. a new copyright holder contributes 2% of the source code: the license is immediately updated, adding him or her as a contributor and recording the corresponding percentage of retribution.

3. Customization. Developers could adapt the license conditions to restrictions in their environment. E.g. retribution is set in BTC in countries where BTC is legal, and only while a BTC is worth $200 or more; otherwise retribution could be paid in Ether or dollars (a rough sketch of such terms follows this list).
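
To make those three properties more concrete, here is a minimal sketch of a “smart” license state that is recomputed as the project evolves. Everything in it is an assumption made for illustration (the field names, the fee levels, the $200 threshold, the currency fallback); none of it comes from the actual CFSL text linked above.

```python
# Sketch of a "smart" license state recomputed as the project evolves.
# Everything here (field names, fee levels, the $200 BTC threshold, the
# currency fallback) is an illustrative assumption, not the actual CFSL terms.
from dataclasses import dataclass, field

@dataclass
class SmartLicense:
    commercial_fee_pct: float = 5.0              # levy applied to commercial use only
    noncommercial_fee_pct: float = 0.0           # noncommercial use stays free
    shares: dict = field(default_factory=dict)   # contributor -> fraction of the code base

    def record_commit(self, contributions: dict) -> None:
        """Recompute retribution shares from lines (or commits) contributed so far."""
        total = sum(contributions.values())
        self.shares = {who: n / total for who, n in contributions.items()}

    def fee_pct(self, commercial: bool) -> float:
        """Polymorphic terms: the applicable fee depends on the kind of use."""
        return self.commercial_fee_pct if commercial else self.noncommercial_fee_pct

    def payment_currency(self, btc_legal: bool, btc_price_usd: float) -> str:
        """Customization: pay in BTC only where it is legal and while it is worth >= $200."""
        return "BTC" if btc_legal and btc_price_usd >= 200 else "USD"

# Flexibility: a new contributor appears and the license state simply updates.
lic = SmartLicense()
lic.record_commit({"alice": 4900, "bob": 4900, "newcomer": 200})  # newcomer wrote ~2%
print(lic.shares["newcomer"])             # -> 0.02
print(lic.fee_pct(commercial=True))       # -> 5.0
print(lic.payment_currency(True, 180.0))  # -> 'USD' (BTC below the $200 threshold)
```

On a blockchain the same state and rules could live in a smart contract, so every party sees the current terms; off-chain it could simply be a file regenerated and committed alongside the code on every change.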

As we can see, it is a very challenging idea. However, the commons transition requires dynamic legal tools in order to adapt old-fashioned copyright laws to distributed systems. At this point it seems very difficult to update copyright law itself, which is why we must make the best of the only legal tool we have left: THE LICENSE.

Photo by Erik
