guest – Open Knowledge Danmark (http://dk.okfn.org)

Guest post: Why free maps? (Mon, 21 Sep 2020)
http://dk.okfn.org/2020/09/21/gaesteindlaeg-hvorfor-frie-kort/

This guest post was written by Leif Lyngby Lodahl, IT architect at Ballerup Municipality, whose background includes serving as the Danish spokesperson for the open-source office suite OpenOffice.

© OpenStreetMap contributors & http://hanshack.com/mapxtract/


You may already know the website www.openstreetmap.org. If you don't, I suggest you give it a try. The site is what we might call the Wikipedia of maps.

Just as Wikipedia guarantees that free knowledge exists, OpenStreetMap guarantees that free maps exist. Free maps matter, because only with free maps can we trust that the map has not been manipulated. Commercial map providers only deliver the map services that serve their own business, and there are several examples of commercial providers manipulating the map to further their own agenda. For instance, several map providers are willing to adjust the map to suit the political wishes of local authorities, even when that conflicts with, say, the UN or internationally accepted positions.

The map covers the whole world and is very accurate and detailed. It is also free to use, and there are no ads. Nor are you tracked for the purpose of showing you ads elsewhere. From time to time a small message pops up reminding you that you can chip in with a donation.

The map is updated using open data (e.g. roads and address points) together with volunteer work. If you feel like helping out, you are very welcome; everyone can contribute, big or small. Start with the Danish guide: https://wiki.openstreetmap.org/wiki/Da:Main_Page . Creating a user account takes no more than a couple of minutes, and what you draw on the map typically becomes visible within 5-10 minutes at the closest zoom levels.

OpenStreetMap is a map, but it is much more than that, because the map itself is only the tip of the iceberg. What is most interesting is the data behind the map. That data is available to everyone, hobbyists and professionals alike. The terms for using the data are very simple: you may use it, as long as you remember to credit and respect the community. That is typically done by displaying the source visibly on or below the map: "© OpenStreetMap contributors". This is the only price you pay, even if you sell a product that uses the data.
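As a taste of what "the data behind the map" means in practice, the community-run Overpass API lets anyone query the raw OpenStreetMap database. A minimal sketch, assuming the public overpass-api.de endpoint; the bounding box and tag choice are only illustrative:

```python
# Sketch: querying the raw OpenStreetMap data via the public Overpass API.
# The endpoint and Overpass QL syntax are real; the bounding box and the
# amenity=cafe tag below are illustrative choices.
import json
import urllib.parse
import urllib.request

OVERPASS_URL = "https://overpass-api.de/api/interpreter"

def build_overpass_query(south, west, north, east, key="amenity", value="cafe"):
    """Build an Overpass QL query for all nodes with key=value in a bbox."""
    bbox = f"{south},{west},{north},{east}"
    return f'[out:json][timeout:25];node["{key}"="{value}"]({bbox});out body;'

def fetch(query):
    """POST the query to Overpass and return the decoded JSON response."""
    data = urllib.parse.urlencode({"data": query}).encode()
    with urllib.request.urlopen(OVERPASS_URL, data=data) as resp:
        return json.load(resp)

# Example: cafes in an approximate central-Copenhagen bounding box.
query = build_overpass_query(55.67, 12.56, 55.69, 12.60)
# result = fetch(query)            # network call, uncomment to run
# print(len(result["elements"]))   # number of matching nodes
```

Anything built on such a query is subject to the same attribution requirement as the map tiles themselves.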

Let us look at a couple of examples.

Navigation

There are a number of good navigation systems; I would recommend an app for Android or iPhone called OsmAnd ( https://osmand.net/ ).

If you have a Garmin navigator, new or old, you can download free map updates from http://frikart.no/garmin/index.html .

There are surely more examples, but these are the two I have used myself and can recommend.

Maps

On www.openstreetmap.org you can switch, in the right-hand panel, between the standard map and a few others, e.g. a public transport map and a cycle map. These maps are examples of how one map can serve different purposes. Because the data is free, other actors can build entirely specialized maps for special purposes. One example is Hike & Bike ( http://hikebikemap.org/ ), a map aimed specifically at cyclists and hikers. On this map you can enable "hillshading", which gives an impression of the elevation differences in the landscape; quite handy when planning a hike. Another example of a specialized map is OpenTopoMap ( https://opentopomap.org ), which emphasizes topography. On that map you can also upload and view routes you have made yourself. At the more curious end there is OpenSnowMap ( https://www.opensnowmap.org/ ), which focuses on ski routes, trails and pistes. An experiment with 3-dimensional maps can be seen at F4Map ( https://demo.f4map.com/ ).

A Danish example is FindToilet ( http://www.findtoilet.dk/ ), which exists both as a website and as a mobile app. You can guess the purpose yourself.

Each map has its own special characteristics, targeted at a particular purpose. It illustrates that the data behind the map can be used in many different contexts.

Be aware that there is some delay from an edit on OpenStreetMap until it shows up on all maps. The various maps pull data at different intervals; it can take from a week up to a month before edits take effect everywhere.

Services

When you check in at a café on Facebook, it is shown on a map from OpenStreetMap.

If you use the training app Runkeeper ( www.runkeeper.com ), you see your run or bike ride on a map from OpenStreetMap. Incidentally, you can download your ride as a GPX file and upload it to OpenStreetMap.
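Since GPX is plain XML, pulling the track points out of such an export takes only the standard library. A minimal sketch; the sample track below is invented for illustration:

```python
# Sketch: reading track points out of a GPX file like the ones training apps
# export. GPX is plain XML, so xml.etree is enough; the sample data below is
# made up for illustration.
import xml.etree.ElementTree as ET

GPX_NS = {"gpx": "http://www.topografix.com/GPX/1/1"}

def track_points(gpx_xml):
    """Return a list of (lat, lon) tuples for every track point."""
    root = ET.fromstring(gpx_xml)
    return [
        (float(pt.attrib["lat"]), float(pt.attrib["lon"]))
        for pt in root.findall(".//gpx:trkpt", GPX_NS)
    ]

sample = """<?xml version="1.0"?>
<gpx xmlns="http://www.topografix.com/GPX/1/1" version="1.1" creator="demo">
  <trk><trkseg>
    <trkpt lat="55.6761" lon="12.5683"/>
    <trkpt lat="55.6764" lon="12.5690"/>
  </trkseg></trk>
</gpx>"""

print(track_points(sample))  # two (lat, lon) points
```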

There are also a number of Danish companies offering map features for a fee. One example is Septima ( https://septima.dk/ ), which among other things offers a map where you can calculate whether your house or terrace will end up in the shadow of a planned building. Over the years the company has built a great many useful specialized solutions.

Maps on your own website: You are welcome to embed maps from OpenStreetMap on your own site, but be aware that heavy usage is strictly forbidden. If your site gets a lot of traffic, you should turn to one of the many commercial providers of online maps based on OpenStreetMap.

Who is behind it?

The map is owned by no one and by everyone. The data behind the map is protected by the licence terms: "free forever". OpenStreetMap is "owned" by the OpenStreetMap Foundation, an independent organisation. Much of the technology behind it is provided free of charge by universities and companies around the world.

Much of the content comes from original authoritative sources; in Denmark, for example, Geodatastyrelsen, Kulturarvsstyrelsen and the Danish municipalities.

And of course the thousands of volunteer contributors who draw directly on the map or upload GPX files (routes).

The UN also contributes to OpenStreetMap, e.g. by co-funding HOTOSM (Humanitarian OpenStreetMap Team). See https://www.hotosm.org/

Guest post: Open public data – a new kind of public service (Thu, 21 Sep 2017)
http://dk.okfn.org/2017/09/21/gaesteindlaeg-abne-offentlige-data-en-ny-form-for-offentlig-service/

This is a guest post by Kristian Holmgaard Bernth, chief adviser at the consultancy Seismonaut. The post was originally written as an article on Seismonaut's website, in a slightly longer version that can be found here. Seismonaut has, among other things, worked on how open public data can see greater use in businesses, and has advised public authorities on publishing open data.

The post reflects the author's own views, and Open Knowledge Danmark has no other affiliation with Seismonaut.

 

What is open public data?
Open public data is a very broad label, and a great many things fall under it.

But fundamentally, three things define open public data:

  1. The public part of the definition means that it is data or information owned by a public authority or institution.
  2. The open part means that it is data made freely available to everyone.
  3. The third element is that it is data that requires no particular protection. That means private individuals, organisations and businesses can use it freely, as they see fit.

To sum up, open public data is data and information in public hands that can be shared and used freely. The basic point is that public authorities, through their work, collect a lot of valuable data and information. That data has not previously been thought of as a resource, but it can be of great value to citizens by creating transparency, or to businesses as a resource in their operations.

Beyond that, open public data is, to me, a new kind of public service, on a par with all the other services the public sector offers across its authorities. So it is not only interesting to ask what data is, but also what data is in a public-sector context. Seen that way, data is something the various authorities must create, publish, maintain and make visible, so that it is available to the people who need it.

Increased focus on data-driven development

Why should we care about this topic?
Open public data is part of a larger picture in which we are generally starting to talk more about data and data-driven development. Open public data is just one kind of data. There are also plenty of private companies that hold data and are beginning to sell it, because it can be used as a resource for development.

To give an overview, we talk about three categories of data:

  1. Your own data
  2. Other people's data
  3. Open public data (a subcategory of other people's data)

The reason we talk about this topic is that data is a resource for business, societal and product development across industries.

The area gets attention because it is interesting to discuss right now, as companies and organisations have become better and better at data-driven development. It is also driven by the large growth potential in unlocking all the data that public authorities sit on, and letting companies and organisations use that data as a resource.

The growth potential can be understood like goods on supermarket shelves: if all the goods become freely available to companies that can use them in developing their products or business, everyone has better resources at hand for creating something new. If companies have more ingredients available for product development, we should expect the output to be more interesting products than the ones we had before. That is why we talk about data: because it is a potentially value-creating resource.

Denmark is doing well

What is the status of making data available?
Overall, Denmark is doing well compared to other countries. The Danish public sector has, over many years, been very good at digitising both its systems and its workflows, and that is the foundation for a lot of digitised data coming into existence. On the big, broad datasets we are doing well, e.g. on environmental and geodata, where Danmarks Miljøportal and DMI hold a lot of public data. Beyond that, however, things are still very much at the grassroots level: enthusiasts around the municipalities and other authorities have taken the initiative and begun publishing a lot of data. The flip side is that the data landscape has become somewhat fragmented.

An example of open public data in use
In cities with paid public parking, the municipalities collect an enormous amount of data. Through parking facilities and ticketing apps, data is collected on where, when and for how long we park when we use the spaces. That data used to sit unused in databases, but it can be an important resource. The City of Copenhagen is therefore now working to get it into the hands of companies that can use it to help residents find a free parking space in rush hour more easily.

What value does open data create for the public sector and for us as citizens?
Companies especially ask for data about citizens' behaviour. Who are they, where are they, and what are they doing? All such information is interesting because it is about people. That kind of data can serve as a resource for companies building better solutions. It therefore creates value for citizens, because the data resource ends up in the hands of those who develop solutions for their everyday lives. On the public side, freed-up data can be used to create better citizen-facing public services, for example internally within public authorities.

 

Hold on to the curiosity and support existing initiatives

How do you think open public data will develop in the future?
My view is that the grassroots level has value in itself. I do not think it will be centralised in the future, but clearly we will likely become more coordinated, especially on the supply side, so that we hopefully exploit the growth-creating resource that lies in data-driven development.

I do think it is important to hold on to the curiosity, and the personal drive for technology and for what data can do. It is about supporting all the good ideas and initiatives, so that the people who sit close to the data get the right tools to work on publishing it, in both the public and the private sector.

Development does not happen solely on the back of a political agenda; it still has to be about how we bring people together, and what we can do for the people who are passionate about this area.

Guest post: Something's (Johnny) Rotten in Denmark (Fri, 21 Apr 2017)
http://dk.okfn.org/2017/04/21/gaesteindlaeg-somethings-johnny-rotten-in-denmark/

This is a guest post by Jason Hare, Open Data Evangelist at OpenDataSoft. The post describes Copenhagen's "City Data Exchange" initiative, which aggregates data from various open and closed sources and constitutes a kind of marketplace for data. At Open Knowledge Danmark we are particularly sceptical about the idea of taking open data published on open platforms (such as Copenhagen's open data portal, which is based on CKAN) and republishing it behind a login, with restrictions on the terms of reuse. The post was originally published on Jason Hare's own blog and reflects the author's own views.

Hitachi Insight Group Repackaging Open Data

Hitachi Insight Group's City Data Exchange pilot in Copenhagen is the latest attempt at a model for monetizing Open Data. There are some hurdles, both practical and ethical, that this model will have to overcome. Repackaging public data for resale limits reuse and accessibility, and may push ethical boundaries when public data is enriched with private data.

I see ethical problems taking two tracks:

  • Personally identifiable information is more likely to surface when public data is enriched with private data and used in a less-than-transparent manner. This is known as the "Mosaic Effect", a term that, as a side note, was invented by the intelligence community in the US. No longer do we have transparent government; instead we have transparent citizens. For more on this etymological footnote, read Victor V. Ramraj's brilliant book Global Anti-Terrorism Law and Policy.
  • Public data, Open Data, has been paid for by taxpayers. The data is a public asset and should not be given away to private-sector companies that have no transparency requirements. See Ade Adewunmi's brilliant piece on the UK's Government Digital Service blog for more on this, and also a blog post I wrote on the OpenDataSoft blog, based on my work at the White House Open Data Roundtables, about data as an asset.

The Data Exchange Model Already Includes You.

Many years ago I worked as a vice president for a data exchange company. That company, RateWatch, packaged and resold bank rate data to banks. The banks found it cheaper to buy rates from the largest database of bank rates in the world than to try to gather the intelligence themselves.

Selling Access is not what Smells About this Deal

APIs usually have tokens, and these tokens can be throttled to prevent abuse of the API and the underlying data. Governments that sell premium access to these APIs are a different animal from what Hitachi is doing. Consuming millions of rows of data is not something the average person does. Selling access to the API, with a Service Level Agreement (SLA), allows the public sector to make the data more reusable.
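The throttling described here is commonly implemented as a token bucket. A minimal sketch, with arbitrary capacity and refill values chosen for illustration (nothing here comes from any real exchange's implementation):

```python
# Sketch: per-API-key throttling with a token bucket. The capacity and
# refill rate are arbitrary illustration values.
import time

class TokenBucket:
    def __init__(self, capacity, refill_per_sec):
        self.capacity = capacity
        self.tokens = float(capacity)
        self.refill_per_sec = refill_per_sec
        self.last = time.monotonic()

    def allow(self):
        """Return True if one request may pass, consuming a token."""
        now = time.monotonic()
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.refill_per_sec)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False

# A free tier might allow a small burst, then refill slowly; a paid SLA
# would simply get a bigger bucket and a faster refill.
bucket = TokenBucket(capacity=3, refill_per_sec=0.1)
print([bucket.allow() for _ in range(5)])  # first 3 True, then False
```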

Local government can do this with other assets: toll roads; industrial use of natural resources; access to medical care; an expectation of public safety. All of these municipal services have a basic free level and a level at which there are additional fees. Consider transportation: if you drive a car, you pay taxes and for gasoline. Taking a bus is less expensive, and there are no taxes. In the same way, data can be distributed more or less equally. The real difference is in the velocity of data consumption.

Examples of Companies that Collect and Repackage data:

  • Acxiom sells data subscriptions to, among other customers, users of Salesforce.
  • XOR Data Exchange offers customer acquisition risk mitigation through subscriptions to credit profiles of consumers. Your cable company probably uses XOR.
  • BDEX offers persona data including spending habits, entertainment habits and political affiliation.
  • LexisNexis sells data analytics supporting compliance, customer acquisition, fraud detection, health outcomes, identity solutions, investigation, receivables management, risk decisioning and workflow optimization.
  • ESRI repackages public (open) data from US Agencies and offers subscriptions to its ArcGIS online service. The data is now in a non-reusable, proprietary format.
  • Hitachi Insight Group and the City of Copenhagen will collect and resell public data to private interests.

It’s a long and somewhat unsettling list

These companies spend money to gather information about all of us based on our commercial and entertainment habits. They then sell this data to marketing companies looking to remarket to all of us. The deal is we exchange our data in return for small benefits at the gas pump, the grocery store, the movie theatre and probably every place you shop. That is ok. We can opt out. We know it’s happening and we play along.

How the Hitachi Deal Works

The idea of data exchanges has been around probably as long as humans have been writing things down. Now that most of us operate in digital environments on a daily basis, it is not surprising that companies have figured out that data is money.

Hitachi Insight Group approached the City of Copenhagen. The City pledged $1.3 million, and Hitachi matched these funds 2:1. Note again that Hitachi is using money from the [local] government. This money is used to incentivize the private sector to invest in making the data suitable and reliable for data sharing. In this scheme, the City recovers some of its upfront costs of making the data suitable for release. Hitachi plans to license its technology to other cities for a one-time startup fee, after which there will be no further obligations on the part of the government.

This implies that all of the revenue then goes back to Hitachi. Hitachi does not know whether this is a viable model, and neither does the City of Copenhagen. At best, the City achieved a goal of limited value: it recovered some capital. At worst, the City short-circuited its own Smarter City initiative.

When we talked about access to APIs and cities wanting to charge for premium access, we decided that was ok. The City has an obligation to taxpayers to recover any revenue possible. Residents can access the API without a token for research or data storytelling; businesses can pay a small fee to increase the velocity of the data harvested from the Open Data Portal.

What makes the Hitachi deal so bad for Copenhagen is that it does not solve the data dissemination issue. Hitachi will control the data market and all access to the data.

Open Knowledge Danmark has previously published a guest post on the same topic, titled: Impressions of City Data Exchange Copenhagen.

Energinet.dk will use CKAN to launch Energy DataStore – a free and open portal for sharing energy data (Tue, 24 Jan 2017)
http://dk.okfn.org/2017/01/24/407/

This is a repost of an entry from Open Knowledge International's blog.

 

Open data service provider Viderum is working with Energinet.dk, the gas and electricity transmission system operator in Denmark, to provide near real-time access to Danish energy data. Using CKAN, an open-source platform for sharing data originally developed by Open Knowledge International, Energinet.dk’s Energy DataStore will provide easy and open access to large quantities of energy data to support the green transition and enable innovation.

Image credit: Jürgen Sandesneben, Flickr CC BY


What is the Energy DataStore?

Energinet.dk holds energy consumption data from Danish households and businesses, as well as production data from wind turbines, solar cells and power plants. All this data will be made available in aggregated form through the Energy DataStore, including electricity market data and near-real-time information on CO2 emissions.

The Energy DataStore will be built using open-source platform CKAN, the world’s leading data management system for open data. Through the platform, users will be able to find and extract data manually or through an API.
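Extracting data "through an API" from a CKAN portal typically means calling CKAN's action API. The `package_search` action and the response envelope below are standard CKAN; the portal URL and query are illustrative assumptions, not the Energy DataStore's actual address:

```python
# Sketch: querying a CKAN portal's action API. The package_search action and
# response shape are standard CKAN; the portal URL below is illustrative.
import json
import urllib.parse
import urllib.request

def search_url(portal, query, rows=10):
    """Build a CKAN package_search URL for a free-text query."""
    params = urllib.parse.urlencode({"q": query, "rows": rows})
    return f"{portal}/api/3/action/package_search?{params}"

def dataset_titles(response):
    """Pull dataset titles out of a decoded package_search response."""
    if not response.get("success"):
        raise RuntimeError(response.get("error"))
    return [pkg["title"] for pkg in response["result"]["results"]]

url = search_url("https://www.opendata.dk", "energi")
# with urllib.request.urlopen(url) as resp:   # network call, uncomment to run
#     print(dataset_titles(json.load(resp)))
```

The same two helpers work against any CKAN site, which is part of the appeal of building on a standard platform.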

“The Energy DataStore opens the next frontier for CKAN by expanding into large-scale, continuously growing datasets published by public sector enterprises”, writes Sebastian Moleski, Managing Director of Viderum, “We’re delighted Energinet.dk has chosen Viderum as the CKAN experts to help build this revolutionary platform. With our contribution to the success of the Energy DataStore, Viderum is taking the next step in fulfilling our mission: to make the world’s public data discoverable and accessible to everyone.”

Open Knowledge International's commercial spin-off, Viderum, is using CKAN to build a responsive platform for Energinet.dk that publishes energy consumption data for every municipality in hourly increments, with a view to providing real-time data in the future. The Energy DataStore will give consumers, businesses and non-profit organizations access to information vital for consumer savings, business innovation and green technology.

As Pavel Richter, CEO of Open Knowledge International explains, “CKAN has been instrumental over the past 10 years in providing access to a wide range of government data. By using CKAN, the Energy DataStore signals a growing awareness of the value of open data and open source to society, not just for business growth and innovation, but for citizens and civil society organizations looking to use this data to address environmental issues.”

Energinet.dk hopes that by providing easily accessible energy data, citizens will feel empowered by the transparency and businesses can create new products and services, leading to more knowledge sharing around innovative business models.

 

 

Notes:

Energinet.dk
Energinet.dk owns the Danish electricity and gas transmission system – the ‘energy’ motorways. The company’s main task is to maintain the overall security of electricity and gas supply and create objective and transparent conditions for competition on the energy markets.
CKAN
CKAN is the world’s leading open-source data portal platform. It is a complete out-of-the-box software solution that makes data accessible – by providing tools to streamline publishing, sharing, finding and using data. CKAN is aimed at data publishers (national and regional governments, companies and organizations) wanting to make their data open and available. A slide-deck overview of CKAN can be found here.
Viderum
Viderum is an open data solutions provider spun off from Open Knowledge, an internationally recognized non-profit working to open knowledge and see it used to empower and improve the lives of citizens around the world.
Open Knowledge International
Open Knowledge International is a global non-profit organisation focused on realising open data’s value to society by helping civil society groups access and use data to take action on social problems. Open Knowledge International does this in three ways: 1) we show the value of open data for the work of civil society organizations; 2) we provide organisations with the tools and skills to effectively use open data; and 3) we make government information systems responsive to civil society.
Guest post: Open Energy Days – a hackathon with open energy data (Fri, 15 Jul 2016)
http://dk.okfn.org/2016/07/15/gaesteindlaeg-open-energy-days-hackathon-med-abne-energidata/

This is a guest post by Matti Bugge, digitisation consultant at Culture and Citizens' Services in the City of Aarhus, who has previously worked on, among other things, the Open Data Aarhus initiative.


Open Energy Days

A lot is happening in the energy data area at the moment. Several municipalities are developing systems that can show information about the energy use of their buildings, consumption patterns in private households are being mapped, and numerous research projects are working on behaviour and new smart technology. In September, some of this data will be opened up for the hackathon Open Energy Days, held in Aarhus by Open Data DK and the Danish Business Authority. It will be possible to look at the municipalities' data on consumption in their own buildings and in households, and work is underway to also get access to energy data from several private companies. The purpose of the event is to generate new, innovative ways of exploiting the enormous amounts of data in the area to create socially relevant solutions and services.

Participants in the event will thus get access to a hitherto relatively closed field of data. On top of that there is a handsome entrepreneurship prize for the winners of Open Energy Days, who will receive advice from a number of companies, including for example Systematic, BIIR and WElearn, on turning the idea from the hackathon into a real start-up.

Participation is free, and the organisers provide full catering throughout the weekend. Anyone can take part regardless of background; the organisers are especially looking for students, entrepreneurs and companies interested in open data, the energy sector, innovation or entrepreneurship.

Open Energy Days takes place 22-25 September at Dokk1, Hack Kampmanns Plads 2, 8000 Aarhus. Read more at: Open Energy Days.

Guest blog: Impressions of City Data Exchange Copenhagen (Mon, 27 Jun 2016)
http://dk.okfn.org/2016/06/27/gaesteblog-impressions-of-city-data-exchange-copenhagen/

This is a guest post by Leigh Dodds, who advises on open data and is associated with the Open Data Institute (ODI). The post describes Copenhagen's "City Data Exchange" initiative, which aggregates data from various open and closed sources and constitutes a kind of marketplace for data. At Open Knowledge Danmark we are especially sceptical about the idea of taking open data published on open platforms (such as Copenhagen's open data portal, which is based on CKAN) and republishing it behind a login, with restrictions on the terms of reuse. The post was originally published on Leigh Dodds' own blog and reflects the author's own views.


 

First Impressions of Copenhagen’s City Data Exchange

Copenhagen have apparently launched their new City Data Exchange. As this is a subject relevant to my interests, I thought I'd publish my first impressions of it.

The first thing I did was to read the terms of service, and then explore the publishing and consuming options.

Current Contents

As of today, 21st May, there are 56 datasets on the site. All of them are free.

The majority seem to have been uploaded by Hitachi and are copies of datasets from Copenhagen’s open data portal.

Compare, for example, this dataset on the exchange with the same one on the open data portal. The open version has better metadata, clearer provenance, more choice of formats and a download process that doesn't require a registration step. The open data portal also has more datasets than the exchange.

Consuming Data

Datasets on the exchange can apparently be downloaded as a "one time download" or purchased under a subscription model. However, I've downloaded a few, and the downloads aren't restricted to being one-time, at least currently.

I've also subscribed to a free dataset. My expectation was that this would give me direct access to an API. It turns out that the developer portal is actually a completely separate website. After subscribing to a dataset I was emailed a username and password (in clear text!) with instructions to go and log into that portal.

The list of subscriptions in the developer portal didn't quite match what I had on the main site, as one that I'd cancelled was still active. It seems you can separately unsubscribe there, but it's not clear what the implications of that might be.

Weirdly, there's also a prominent "close your account" button in the developer portal, which seems a little odd. It feels like two different products or services have been grafted together.

The developer portal is very, very basic. The APIs exposed by each dataset are:

  • a download API that gives you the entire dataset
  • a “delta” API that gives you changes made between specific dates.

There are no filtering or search options. No format options. Really, there's very little value-add at all.

Essentially, subscribing to a dataset gives you a URL from which you can fetch the dataset on a regular basis, rather than having to manually download it. There's no obvious help or support for developers creating useful applications against these APIs.

Authorising access to an API is done via an API key added as a URL parameter. They don't appear to be using OAuth or similar for extra security.
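The access pattern described, a full download plus a date-bounded delta, with the key in the query string, can be sketched as follows. The host, paths and parameter names are hypothetical, not the exchange's real API; the point is the pattern, including its weakness, since keys in URLs end up in server logs and browser history:

```python
# Sketch of the access pattern described above: a full-download endpoint and
# a date-bounded "delta" endpoint, authorised by an API key passed as a URL
# parameter. The host, paths and parameter names are all hypothetical.
import urllib.parse

BASE = "https://exchange.example.org"  # hypothetical host

def download_url(dataset_id, api_key):
    """URL for fetching the entire dataset (the "download" API)."""
    params = urllib.parse.urlencode({"apikey": api_key})
    return f"{BASE}/datasets/{dataset_id}/download?{params}"

def delta_url(dataset_id, api_key, since, until):
    """URL for fetching changes between two dates (the "delta" API)."""
    params = urllib.parse.urlencode(
        {"apikey": api_key, "from": since, "to": until})
    return f"{BASE}/datasets/{dataset_id}/delta?{params}"

# Note the secret sitting in plain sight in the URL, which is exactly why
# schemes like OAuth put credentials in headers instead.
print(delta_url("parking-events", "SECRET", "2016-06-01", "2016-06-07"))
```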

Publishing Data

In order to publish data you need to have provided a contact phone number and address. You can then provide some basic configuration for your dataset:

  • Title
  • Description
  • Period of update: one-off, hourly, daily, weekly, monthly, annual
  • Whether you want to allow it to be downloaded and, if so, whether it's free or paid
  • Whether you want to allow API access and, if so, whether it's free or paid

Pricing is in kroner, and you can set a price per download or a monthly price for API access (such as it is).

To provide your data you can either upload a file or give the data exchange access to an API. It looks like there’s an option to discuss how to integrate your API with their system, or you can provide some configuration options:

  • Type – this has one option, "Restful"
  • Response Type – this has one option, "JSON"
  • Endpoint URL
  • API Key

When uploading a dataset, you can tell it a bit about the structure of the data, specifically:

  • Whether it contains geographical information, and which columns include the latitude and longitude.
  • Whether it's a time series, and which column contains the timestamp.

This is as far as I've tested with publishing, but it looks like there's a basic workflow for draft and published datasets. I got stuck because of issues trying to publish and map a dataset that I'd just downloaded from the exchange itself.

The Terms of Service

There are a number of interesting things to note here:

Section 7, Payments: “we will charge Data Consumers Service Delivery Charges based on factors such as the volume of the Dataset queried and downloaded as well as the frequency of usage of the APIs to query for the Datasets”

It's not clear yet what those service delivery charges will be. The platform doesn't currently provide access to any paid data, so I can't tell. But it would appear that even free data might incur some charges. Hopefully there will be a freemium model?

It seems likely, though, that the platform is designed to generate revenue for Hitachi through ongoing use of the APIs. But if they want to raise traffic, they need to think about adding a lot more power to the APIs.

Section 7, Payments: “As a Data Consumer your account must always have a positive balance with a minimum amount as stated at our Website from time to time”

Well, this isn’t currently required during either registration or signing up to subscribe to an API. However, I’m concerned that I need to let Hitachi hold money even if I’m not actively using the service.

I’ll also note that in Section 8, they say that on termination, “Any positive balance on your account will be payable to you provided we receive payment instructions.” Given that the two payment options are Paypal and Invoice, you’d think they might at least offer to refund money via PayPal for those using that option.

Section 8, Restrictions in use of the Services or Website: You may not “access, view or use the Website or Services in or in connection with the development of any product, software or service that offers any functionality similar to, or competitive with, the Services”

So I can’t, for example, take free data from the service and offer an alternative catalogue or hosting option? Or provide value-added services that enrich the freely available datasets?

This is pure protecting the platform, not enabling consumers or innovation.

Section 12, License to use the Dataset: “Subject to your payment of any applicable fees, you are granted a license by the Data Provider to use the relevant Dataset solely for the internal purposes and as otherwise set out under section 14 below. You may not sub-license such right or otherwise make the Dataset or any part thereof available to third parties.”

Data reuse rights are also addressed in Section 13 which includes the clause: “You shall not…make the Dataset or any part thereof as such available to any third party.”

While Section 14 explains that as a consumer you may “(i) copy, distribute and publish the result of the use of the Dataset, (ii) adapt and combine the Dataset with other materials and (iii) exploit commercially and noncommercially” and that: “The Data Provider acknowledges that any separate work, analysis or similar derived from the Dataset shall vest in the creator of such”.

So, while they’ve clearly given some thought to the creation of derived works and products, which is great, the data can only be used for “internal purposes”, which are not clearly defined, especially with respect to the other permissions.

I think this precludes using the data in a number of useful ways. You certainly don’t have any rights to redistribute, even if the data is free.

This is not an open license. I’ve written about the impacts of non-open licenses. It appears that data publishers must agree to these terms too, so you can’t publish open data through this exchange. This is not a good outcome, especially if the city decides to publish more data here and on its open data portal.

The data that Hitachi have copied into the site is now under a custom licence. If you access the data through the Copenhagen open data portal then you are given more rights. Amusingly, the data in the exchange isn’t properly attributed, so it breaks the terms of the open licence. I assume Hitachi have sought explicit permission to use the data in this way?

Overall I’m extremely underwhelmed by the exchange and the developer portal. Even allowing for it being at an early stage, it’s a very thin offering. I built more than this with a small team of a couple of people over a few months.

It’s also not clear to me how the exchange in its current form is going to deliver on the vision. I can’t see how the exchange is really going to unlock more data from commercial organisations. The exchange does give some (basic) options for monetising data, but has nothing to say about helping with all of the other considerations important to data publishing.

]]>
Rapport fra data expedition om Københavns cykelstidata http://dk.okfn.org/2014/09/15/rapport-fra-data-expedition-om-kobenhavns-cykelstidata/ http://dk.okfn.org/2014/09/15/rapport-fra-data-expedition-om-kobenhavns-cykelstidata/#comments Mon, 15 Sep 2014 10:26:19 +0000 https://dk.okfn.org/?p=158  

This is a guest blog post by Michele Kovacevic, who led a School of Data data expedition at the Kopenlab festival in Copenhagen this summer. Here is her report (cross-posted from the School of Data blog; see the original here).

Pink and Copenhagen Blue

When I signed up to the School of Data mailing list, I didn’t quite know what I was getting myself into.

Within two days of joining, I was invited to lead a data expedition at the Kopenlab citizen science festival alongside the EuroScience Open Forum in Copenhagen, Denmark.

My first reaction was trepidation (I didn’t know what a data expedition was and I haven’t worked extensively with datasets for a few years) but Michael Bauer at the School of Data assured me that it would be a fun learning experience. So I enthusiastically agreed and my first quest with data began.

I quickly learned that a data expedition aims to discover stories hidden in the ‘Land of Data’. As the expedition guide, I would set the topic and encourage my expedition team to work together to solve real-life problems, answer questions and tell stories with data.

An important side note (and one I reiterated several times during the expedition) is that there are no right answers and no expected final output. The point of a data expedition is to think freely and creatively, learn from each other and hopefully develop some new skills and a lifelong love of data.

Given Copenhagen’s reputation as the most bike-friendly city in the world, we chose to focus on the comprehensive cycling statistics that Denmark collects every day.

For example, did you know that more people commute by bicycle in greater Copenhagen than ride bikes to work in the entire United States? This kind of information can be found in easily accessible datasets such as the EU public dataset and Denmark’s national statistics database.

We came up with a few guiding questions to stimulate the imaginations of our expedition team:

How far do I have to walk to get a bikerack in Copenhagen?

Are there areas where bikeracks are more dense and how does this correlate with where people are riding bikes?

How many bike accidents are caused in Copenhagen because cyclists are drunk?

Do more young or old people ride bikes in Copenhagen?

At which age do people spend most money on bicycles?

So armed with some sample datasets, a laptop and flipchart, I set off to Copenhagen to meet Ioana, Deepak, Akash, Mirela and Tobias – my expedition team.

After finding 10 things in common with each other, our first task was to work out everyone’s strengths and weaknesses so we could set loose roles. Ioana became our analyst and engineer (data cruncher), Deepak and Akash were our storytellers (finding interesting angles to explore and shaping the final story), Mirela was our scout (data hunter) and Tobias was our designer (beautifying the outputs to make sure the story really came through).

Our next task was to come up with our expedition questions and we took to this task very enthusiastically, coming up with more questions than we had time to explore! To make the questions easier to tackle, we decided to group them by theme (travel usage, life cycle/accidents/rules/compliance, geographical stats, economics, policy, culture). The group split in half to tackle two different sets of questions.

Deepak, Akash and Tobias looked at what policies influenced cycling adoption in Denmark and compared these to a number of different cities across the world.

Mirela and Ioana mapped the number of cyclists in different areas of Copenhagen in order to develop a density map, which could be overlayed with other data such as where cyclists are most active at certain times of day, accident rates/locales and bikerack density.
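For anyone wanting to try something similar, the core of such a density map is just binning observations into grid cells. Here is a minimal sketch with made-up coordinates, not the expedition's actual data:

```python
from collections import Counter

# Made-up (lat, lon) cyclist observations -- not the expedition's actual data.
observations = [
    (55.70, 12.53), (55.70, 12.54), (55.71, 12.53),  # a cluster to the northwest
    (55.67, 12.58), (55.66, 12.59),                  # a smaller cluster
]

def density_grid(points, ndigits=1):
    """Count observations per grid cell by rounding coordinates to `ndigits`
    decimal places (1 decimal is roughly an 11 km x 6 km cell at this latitude)."""
    return Counter((round(lat, ndigits), round(lon, ndigits)) for lat, lon in points)

grid = density_grid(observations)
busiest_cell, count = grid.most_common(1)[0]
print(busiest_cell, count)  # → (55.7, 12.5) 3
```

The resulting counts per cell are exactly what you would feed into a heat-map layer on top of a base map, and overlaying accident rates or bikerack density is just another grid built the same way.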

We spent the next two hours of the expedition searching and scraping various datasets (a full list can be found in this googledoc) in order to come up with our stories to tell the Kopenlab citizen science festival attendees.

We came across a few hurdles, namely the “cleanness” and consistency of the data. Often the datasets were only available as PDFs (CSV and excel spreadsheets are much easier to work with) and data headers often didn’t have references.

“It would be nice to have it all in a bigger table,” Ioana said.

In the face of these challenges we gave each other a helping hand to find alternative exploration routes (much like a real quest, really).

Another one of the great aspects of a data expedition is the focus on skill sharing. Ioana had a great understanding of Google Fusion Tables, so she was able to show some of the other participants how to sort and analyse data using this tool. Unfortunately we didn’t get much time to explore the plethora of open source data analysis and visualization tools (some are listed on page 5 of this doc).

So after three hours traversing the wilds of Copenhagen’s bike data we had two stories to tell.

Ioana presented her team’s heat map showing that the number of cyclists was most dense in the northwest part of Copenhagen.

Deepak presented his team’s infographic showing that many factors influence cycling usage in urban centers:


We had a great time exploring these datasets, but with the short time we had available, we only really scraped the surface of Copenhagen’s bike data stories.
Luckily Matias and his bikestorming crew ran another expedition in Copenhagen two months later and were able to build on what we learnt…

Stay tuned for part two of our biking blog series written by Matias Kalwill, founder and designer of Bikestorming.

More pics from Kopenlab here and here.

]]>
Gæsteblog: The Danish Open Government Partnership Action Plan http://dk.okfn.org/2014/04/02/gaesteblog-the-danish-open-government-partnership-action-plan/ http://dk.okfn.org/2014/04/02/gaesteblog-the-danish-open-government-partnership-action-plan/#respond Wed, 02 Apr 2014 10:46:29 +0000 https://dk.okfn.org/?p=61

This is a guest blog post by Paul Maassen, Civil Society Coordinator for the Open Government Partnership (OGP). Among other things, he works on supporting engagement between the OGP partnership and civil society stakeholders in all 63 of the programme’s countries, and he has previously worked as Head of Finance and Partnership for WWF International’s Global Climate & Energy Initiative and as Program Manager of the ICT & Media programme at the Dutch development organisation Hivos.

At Open Knowledge Foundation Danmark we are very interested in the Open Government Partnership, and we have previously given feedback on the action plan discussed below. If you would like to join the discussion and increase Danish influence, hop on our discussion list and have your say. See further material on Denmark’s participation at the bottom of this blog post.

What is the Open Government Partnership (OGP) in 109 words?

OGP creates a platform for local reformers to make government more open. Government and civil society work together to develop and implement ambitious open government reforms, making government more open and accountable and improving its responsiveness to and engagement with citizens. Since 2011, OGP has grown from 8 participating countries to 63 – the most recent country to join was Tunisia. Results include a surge in open data platforms, big steps on fiscal transparency and beneficial ownership openness in a range of countries, and finally a Freedom of Information law in Brazil. There are three main steps in the mechanism: open consultation, concrete and ambitious commitments, and independent monitoring.

Looking at Denmark’s achievements

Denmark’s achievements in the first round have been assessed by Mads Kæmsgaard Eberholst, and his report is now online (English and Danish) and open for comments for the next couple of weeks.
The results look pretty good compared to other countries: a substantial part of the commitments were actually met. The downside is the limited ownership among government entities and the relatively narrow set of issues addressed in the plan (it is tech-focused). The actual partnership between government and civil society could also still be strengthened quite a bit. The report gives good recommendations at the end.

What’s next?

This is a crucial period. With the launch of the report also comes the launch of the next consultation and negotiation for ambitious commitments. The next round needs to be better than the first, for sure. Some suggestions on how to advocate around the report:

  • By commenting in public you can make a clear statement on what you agree with or not in the draft report. These comments will be posted alongside the final version of the report. You may also add additional information that, for whatever reason, was not included in the report.
  • You can use the draft conclusions on the consultation process for the first NAP to push your government for an improved process for the next Action Plan. A resource that can be helpful here is the study on country experiences captured here. The IRM report will also include a useful checklist of the requirements for consultation in the OGP Articles of Governance.
  • You can use the draft conclusions on commitment delivery and priority setting to draft new commitments. Consulting the Open Government Guide can also be helpful in this.
  • You can use the draft report to start preparing your response for when the final IRM report is launched – press release, campaign etc.

In general, what is of key importance is that countries should take advantage of the release of their first IRM report to inform the development of their second Action Plan. Denmark has to submit a new draft Action Plan by April 30, discuss it at the European Regional Meeting (8–9 May, Dublin) and upload the final version by June 15. That means the consultation needs to happen over the next couple of months. This is the guidance that has been shared with your government – you can use it to advocate for real involvement.

If you want more information about the IRM, have a look at the FAQ or contact the IRM program staff (they have an “open door” policy). If you want to know more about good press events surrounding the IRM and/or civil society responses to the report, get in touch with colleagues in South Africa and the Philippines. Mexico, Norway and the Philippines are good examples of integrating the IRM into the next action planning cycle. Furthermore, key data from the IRM researchers will be released by the IRM team. You can find it here, including a data guide. There are plenty of open data geeks in the OGP community who can make a nicely visualised country overview and comparison out of these.

If you are interested, there is quite a bit of information on how other countries have done it: lessons learned from the first round and guidance on how to do consultation better. And there is a mailing list for the international civil society community around OGP.

In conclusion: the IRM is as strong as we make it. With strong input, convincing monitoring efforts of our own, and strategic campaigning around the report launch, we can influence the strength of the second round of consultations and commitments!

Background information

Read more about Open Government Partnership on the official website. Denmark is one of the 63 members of the Open Government Partnership.

Denmark’s achievements in the first round have been assessed by journalist Mads Kæmsgaard Eberholst; his report is now online (English and Danish) and you can read submitted comments here.

]]>
Open Data Day 2014 i København http://dk.okfn.org/2014/02/17/open-data-day-2014-i-kobenhavn/ http://dk.okfn.org/2014/02/17/open-data-day-2014-i-kobenhavn/#comments Mon, 17 Feb 2014 10:31:23 +0000 https://dk.okfn.org/?p=63

This Saturday (22 February), Open Data Day is celebrated with hackathons and data events all over the world. Four exciting workshops are planned in Copenhagen. Everyone is welcome to take part, learn more about open data, work with concrete open datasets, discuss, and meet others interested in open data. Together with Wikimedia Danmark, OKFN-DK is running a data sprint on climate data.

The post below was written by Lenka Hudakova.

 

Join us for Open Data Day in Copenhagen!


On Saturday February 22nd, open data is celebrated with events worldwide. In Copenhagen, four workshops will take place at CBS. Everyone is welcome to join to learn about open data, discuss its utilization options, take part in the event’s activities and meet other like-minded people.

In a nutshell, open data is data freely available to everyone to use, reuse and redistribute without restrictions. With open government data, for instance, it is possible to build applications and services for the public or private sector. Open government data can also support the transparency and accountability of public spending. The open data movement views open data as an enabler of socio-economic development (health care, education, economic productivity, etc.): better understanding of important issues leads to better decision-making.

Copenhagen is one of many cities participating in the global event. This year’s Copenhagen Open Data Day will comprise the following four activities:

 

  • Data sprint with Open Knowledge Foundation Denmark and Wikimedia Denmark
    For this workshop, we chose interesting environmental/climate open government data to work with. We will dig into the datasets, analyze them, look for stories and do data visualizations. In addition, we are hoping to have a couple of environmental NGO representatives drop by as guests and share their insights from this topic area. Here are some of the interesting datasets that we’re going to look into: Danish data on CO2 emissions from Region Syddanmark, international climate data from the World Bank, and official Danish energy data from the Danish Energy Agency. Bring your own ideas for analysis and visualizations or get inspired by the participants at the workshop. During the sprint we are arranging a hangout with a foreign Open Data Day event.

 

  • Hackathon with Gatesense
    At the hackathon we will play with real-time open data samples from the municipalities of Copenhagen and Aarhus and from international sources to create innovative ideas, demos and solutions to optimize our resources: water, energy, waste and transportation. Please note that this workshop will kick off early (Friday February 21 at CBS, Dalgas Have ground floor, DSO089) and requires separate registration.

 

  • Remixing SMK with BYOB (Bring Your Own Beamer)
    Bring Your Own Beamer is a fun way to turn still images into videos. Under the guidance of Jacob Sikker Remin, and with input from Marieke Verbesiem on stop motion technique, we will use works from SMK’s collection released under open licensing. This activity is a collaboration with the ongoing FROST festival, and the created remixes are meant to be exhibited as part of the festival’s social event (at Tøjhusmuseet, 19:00–23:00). Please note that Jacob would welcome hearing from you if you would like to participate (Jacob@sciencefiction.dk, subject: BYOB).

 

  • Hackathon on the city’s challenges with KK Data Portal and Aarhus Open Data
    We will delve into societal challenges related to climate change, mobility and population growth using real-time open data from the Copenhagen and Aarhus municipalities together with other international sources. Although the solutions might not come instantly, a hackathon is a great opportunity to demonstrate the value of working with open data and of cooperating across the city’s many stakeholders.

 

For anyone interested in taking part, please find more information at dk.opendataday.org and on Facebook. And remember: regardless of your experience with open data, everyone with an open mind and an interest in this cause is warmly welcome to participate, learn, network and, most of all, have fun with us. The event venue is CBS (Solbjerg Plads 3, Frederiksberg), where the main event, hackathons and workshops take place from 10:00 until 16:00. Participation is free of charge, because we believe in open knowledge sharing and empowered citizenship. We look forward to seeing you at Copenhagen Open Data Day 2014!


]]>