Joseph Kiniry

70,622 Norwegians voted over the internet - but the security draws criticism

As soon as Norway's code drop happens, DemTech will be doing some analysis. From what I have heard (and we had observers at the election in Norway), the problem came down to a single JavaScript statement in the wrong place: an initialization inside, rather than outside, a loop. This needs to be confirmed by looking at the actual code in question. Regardless, it shows how a single error in a software system of over 250K lines of code can violate the most fundamental principles of the democratic process.
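
To make the class of bug concrete, here is a hypothetical sketch in Python (not Norway's actual JavaScript, which we have not yet seen) of how the same initialization statement placed inside versus outside a loop silently corrupts a count:

```python
# Hypothetical illustration only: an accumulator initialized in the
# wrong scope makes a tally wrong while the code still "runs fine".

def tally_correct(ballots):
    total = 0          # initialized once, outside the loop
    for b in ballots:
        total += b
    return total

def tally_buggy(ballots):
    total = 0
    for b in ballots:
        total = 0      # re-initialized every iteration: the bug
        total += b
    return total       # only the last ballot survives

ballots = [1, 1, 1, 1]
print(tally_correct(ballots))  # 4
print(tally_buggy(ballots))    # 1
```

Both functions type-check, run without error, and agree on single-ballot inputs, which is exactly why this class of defect survives superficial testing.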

2 October 2013 at 13:17
Mysterious error plagues Danske Bank's online banking

Indeed, in my experience, the use of Ghostery causes one to be unable to even click on the "login" link on DanskeBank's website.

Joe

2 October 2013 at 11:22
Unit tests of our e-voting system can wait

We'll be adding a snapshot of the system to our DemTech evoting systems repo this week too. https://github.com/demtech/evoting-systems

We have performed audits of several released evoting systems so far: Norway's system, Scantegrity II, the Netherlands' KOA system, etc. I'm certain we'll see the same thing with the Estonian system as we have seen the world over: poor software engineering practices, little-to-no validation (unit/system tests), no verification, no traceability to legal requirements, etc.

I expect to find all of the same here, despite the declarations of the Estonian system's architect, both in public and to me personally, that the system is fantastic, needs no review, is well tested, etc.

Coincidentally, I'm presenting a paper on system testing evoting tally systems at a conference this week in Surrey. If you are interested in how this should/can be done (generating system tests via model checking/finding), have a read: http://scholar.google.com/scholar?hl=en&q=Formal+Model-based+Validation+for+Tally+Systems&btnG=&as_sdt=1%2C5
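
For readers curious about the flavor of that approach, here is a toy sketch in Python. This is my own illustration of the general idea, not the paper's actual tool chain: exhaustively enumerate every small ballot profile (the "model finding" step) and check a tally implementation against a trivially correct specification.

```python
# Model-based test generation, in miniature: every election "model" up
# to a small bound becomes a system test for the tally implementation.
from collections import Counter
from itertools import product

CANDIDATES = ["A", "B"]

def tally_spec(ballots):
    # Specification: a plurality count, obviously correct by inspection.
    return Counter(ballots)

def tally_impl(ballots):
    # Implementation under test, written independently of the spec.
    counts = {}
    for b in ballots:
        counts[b] = counts.get(b, 0) + 1
    return Counter(counts)

def generated_tests(max_voters=4):
    # Exhaustively enumerate all ballot profiles with 0..max_voters
    # voters; for two candidates and a bound of 4 that is 31 profiles.
    for n in range(max_voters + 1):
        yield from (list(p) for p in product(CANDIDATES, repeat=n))

failures = [p for p in generated_tests() if tally_impl(p) != tally_spec(p)]
print(len(failures))
```

Real tally schemes (e.g., STV) have far richer models, so the enumeration is done with a model checker or finder rather than `itertools`, but the spec-versus-implementation structure is the same.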

14 July 2013 at 11:16
Datatilsynet concerned about data leak: Rigspolitiet must answer for it

Note that pentesting does not guarantee a secure system; it can only raise confidence and enable more accurate risk analysis, and its value is highly subjective, depending on the skill of the pentesters and the amount of time they have for the analysis.

Sorry for being so quiet lately on all of these matters, Version2 readers, but I'm on paternity leave in California. ;)

Joe

12 June 2013 at 21:10
IT professor on the future of NemID: »It is about transparency«

Allan,

Your more fundamental complaints about the social, political, and technical foundations of NemID's architecture resonate with me. I, too, think that there are fundamental problems with what we use today. In the short term, I try to objectively and critically analyze what we have and make recommendations on how to improve or fix it in the short-to-medium term. But I never compromise on the long term, which is to show where I (and others in the security community) believe we should go in order to focus on citizen privacy and institutionalized flexibility in all digitalization services, not just identification/authentication services like NemID.

Rolling back the accidental or purposeful surveillance society we have today in many western countries is difficult, but we, the writers and readers of Version2, should be at the forefront of that dialog. I believe we must do so both by providing a level-headed critique of what we have and by constructively proposing (or, better yet, demonstrating) what we should have in the future. This is exactly why I work on analyzing public sector projects like NemID, NemLog-in, the CPR system, and others.

Joe

8 May 2013 at 15:01
IT professor on the future of NemID: »It is about transparency«

Peter, your comments on the advantages and disadvantages of relying upon third-party identification and authentication services are correct. Precisely characterizing the security profile of an authentication service that relies upon such n-factor authentications is troublesome, but generally reasoning about whether or not the introduction of such features helps or harms security and privacy is possible.

The problem is compounded by the traditional tradeoff between security and convenience, especially if authentication is context-sensitive (e.g., whether or not you are using your typical home machine, the IT sophistication level of the user, authentication for the disabled, etc.). In general, mandating a particular baseline of security (i.e., a corporately or publicly defined risk level) is sensible. Disallowing IT-savvy users or specific principals (primarily corporate, but sometimes individuals) from raising their security level seems to work at odds with the goals of a mission-critical security system like NemID.
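
As a hypothetical sketch of that principle (the names and numbers here are mine, not NemID's), such a policy lets a principal raise, but never lower, the mandated authentication baseline:

```python
# Hypothetical policy sketch: a mandated security baseline that
# principals may opt to exceed but are never allowed to weaken.
BASELINE_FACTORS = 2  # assumed baseline, e.g., password + keycard

def effective_factors(requested: int) -> int:
    """Honor authentication requests at or above the baseline;
    reject any attempt to authenticate below it."""
    if requested < BASELINE_FACTORS:
        raise ValueError("cannot authenticate below the mandated baseline")
    return requested

print(effective_factors(3))  # a savvy principal opts into 3 factors
```

The design choice is that "more secure than required" is always permitted, while "less secure than required" is a hard failure rather than a silent downgrade.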

Best, Joe

8 May 2013 at 14:56
IT professor on the future of NemID: »It is about transparency«

Hello readers,

Your comments above are good ones. The solution that we are prototyping is meant to look like a drop-in replacement for NemID but is actually quite different behind the scenes: e.g., no public interface exposed to DDoS, no mandatory private key delegation, no Java on the client side, no mandatory keycard, and we introduce n-factor authentication to leverage other public authentication services. We'll announce a demonstration of it and our NemLog-in replacement later this summer, and both projects will be up on GitHub soon.

Best, Joe

5 May 2013 at 22:48
How you can help the development of open source election software

That's a cut-and-paste error.

2 May 2013 at 16:51
Help the researchers make Danish election software open source

Feel free to ask any questions you like about our R&D, development processes, methodologies, and tools, etc.

The development we do focuses on case studies that show off advanced concepts, tools, and techniques for mission-critical and safety-critical systems. Everything we do is fully Open Source: not just the final product, but everything we produce every step of the way.

With regard to the comment above about requiring IT firms to release the source of their commercial projects: this has been tried in several countries and has rarely succeeded. Traditional IT consultancy firms are, unfortunately, frequently unwilling to lift the curtain on the way that they work and the quality of their software development.

Additionally, as we advocate that election software of any kind should be developed with a focus on security and correctness, many traditional vendors do not have the expertise necessary to fulfill the requirements we recommend to government(s). This means that only "high-end" companies would be able to bid on public tenders, and such companies charge more than the standard rates of local firms and do not have existing relationships with the local or national government. Consequently, they are not on the various "positive lists", and thus are sometimes disqualified from bidding in the first place.

In the end, I strongly advocate that systems developed for public elections should be written by the public, not by private firms in a closed, proprietary fashion. Our role as researchers at a university in this space is to provide guidance, training, and support, and to do basic and applied research. But remember, I am a member of the public too, so this is one way that I contribute in a concrete way to the public good.

Best, Joe Kiniry

2 May 2013 at 13:32
How you can help the development of open source election software

Please feel free to ask any questions you like about our R&D, current systems under development, development processes, methodologies, and tools!

Joe

2 May 2013 at 13:22
Haven't you learned to debug code?

Peter,

I think you missed the courses that I teach. Readers have also commented on other courses that do explore these topics, but perhaps not as aggressively as you and I would like (e.g., the Advanced Software Engineering course at ITU or the basic Software Engineering course here at DTU).

See, e.g., my BSc course "Analysis, Design, and Software Architecture with Project" (e.g., https://blog.itu.dk/BDSA-E2011/) and my MSc course "Advanced Models and Programs" (course descriptions appended below).

A huge problem with the course listings and web pages at both DTU and ITU is that they are not public by default—a practice I've been fighting for years.

Some of us take these topics very seriously and we force students to dive into large-scale systems software engineering. My students learn tools and techniques used in companies that work in safety- and mission-critical systems but are equally applicable to "normal" software development.

Joe

Prof. Joseph Kiniry, Head of Software Engineering Section, DTU

Name of course (English): Analysis, Design and Software Architecture

Intended learning outcome:

After completing the course and its project work students must:

  • be able to describe and apply object-oriented methods for analysis and design,
  • explain the principles of software architecture, including the variety of common architectures and design patterns and their use,
  • understand and be able to execute all the primary facets of software development within software engineering including analysis, design, implementation, testing, validation, and verification,
  • be able to use the common tools of the domain including configuration management, build systems, test frameworks, and version control,
  • be able to document the analysis, design, and software architecture of large systems through the use of common standards for documentation including UML, BON, Javadoc, C#'s documentation tools, etc.,
  • be able to continuously change (re-factor) a software system through adjustments in its architecture or refinements in its configuration,
  • be able to construct useful, coherent, large-scale systems of up to approx. 10 KLOC in size in the C# programming language, including the ability to perform system and domain analysis for a given problem, propose an appropriate software architecture, write a system specification and its implementation, and validate the implementation against its specification.

The design and implementation of such an application may include the use of advanced OO constructs such as generics, callbacks, delegates, events, aliasing, etc., advanced data types and algorithms, the use of third-party APIs and frameworks, distributed systems constructs including sockets, streams, remote procedure calls, concurrency constructs such as threads, semaphores, monitors, messages, tasks, etc., graphical user interface toolkits, and databases.

Content:

  • object-oriented analysis and design using a modeling language such as UML or BON
  • software architectures and design patterns
  • principles of software engineering
  • the C# programming language and the .NET platform
  • advanced programming in an OO language

Name of course (English): Advanced Models and Programs

Intended learning outcome: The course has two major parts, as detailed under course form below: a regular course followed by a project.

After the course, the students are expected to be able to:

  • describe relevant concepts within the themes of the course,
  • account for the practical applications of the covered constructs, and
  • compare selected techniques and constructions within a single course theme.

On the basis of the project, the students are expected to be able to:

  • apply relevant methods and techniques in the chosen project,
  • argue for the overall design-decisions in, and correctness properties of, the project, and
  • relate the project to the underlying theory.

Content: The subject of the course is programming language foundations and technologies, with special attention to advanced technologies that are likely to influence software practice over the next ten years.

The content of the course is structured into four themes.

  1. Axiomatic Reasoning about Models and Programs

In particular, their pragmatic use in program design via model-driven development, implementation, logging, validation, and verification.

The material and structure for this part of the course will likely come from Benjamin Pierce's Software Foundations course using the Coq proof assistant.

  2. Foundations of Type Systems

In particular, their pragmatic use in model-based programming as evidenced in type-level annotations like those used in Java (JSR 308 and JSR 305), C# (like those supported by ReSharper, Code Contracts, and PEX), and Eiffel (via the Eiffel Information System, or EIS).

The material and structure for this part of the course will likely come from Luca Cardelli's notes and the Twelf system.

  3. Rigorous Software Engineering

In particular, the concepts, tools, and techniques used for analysis and design (BON and rigorous variants of UML, like those supported by Artisan Studio) in the Java (the Java Modeling Language, JML), C# (Code Contracts and PEX), and Eiffel programming languages and their environments (Eclipse, Visual Studio, and EiffelStudio).

  4. Design and Development of, and Reasoning about, Modern Concurrent Systems

In particular, the concepts, tools, and techniques used in Java (threads and java.util.concurrent), C# (threads and various .Net frameworks covered in the PPCP course from Microsoft Research and others), and Eiffel (threads and SCOOP).

23 April 2013 at 14:42
These 7 experts will advise the politicians before the vote on e-voting

I'm aware of Ivan's work in the area and he and DemTech do keep in touch.

28 February 2013 at 12:47
Hearing on eVoting

Note that DemTech's response essentially says, "We think that objective trials are a good idea if you get the law in order by following these recommendations." Trials that are properly designed, strictly time-limited, inexpensive, controlled, and transparent can help reveal whether evoting in some form or another is a good idea for Denmark. Unfortunately, as I stated in my lecture, the response from the Ministry does not give me, personally, faith that they will listen to our expert advice, and thus I, personally, feel that the law as it currently stands should be rejected.

Now, if they do listen to our advice because of the debate in Parliament, then sure, let's do trials.

I'm sure the vast majority of Version2 readers have a prediction about how those trials will end. But I, as a scientist, will do my best to remain objective, independent, and transparent, helping those running the trials to the best of my ability as a public official and expert in evoting. Then we can all evaluate the evidence and give our formal recommendation based upon that evidence.

That's how science is done and that's how evidence-based decision-making should work in government.

8 February 2013 at 21:31
E-voting researchers run over by busy politicians: At least wait until we are finished

I have nearly twenty years' experience in running large IT projects, both in industry and academia. Suffice it to say that we know what we are doing on that front. Feel free to look up my LinkedIn profile for evidence.

6 February 2013 at 14:32
Denmark's leading e-voting researcher: Reject the e-voting bill

Let's say hundreds of billions of DKK on the conservative side, then. I do believe that, when all is said and done, it has gotten, or will get, into the trillions.

4 February 2013 at 17:54
Denmark's leading e-voting researcher: Reject the e-voting bill

Thanks for the coverage and comments!

The HAVA costs (estimated at over $2B USD, roughly 10B DKK) are only the tip of the iceberg of total national costs in the U.S.A. Per-state estimates for the further costs of experiments, equipment, storage, maintenance, training, (re-)certification, salaries, overtime, lawsuits, etc. are usually in the range of tens of millions of dollars per state, and we have 50 states, so...you do the math.
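
For those who want the back-of-envelope arithmetic spelled out (using the rough figures above; these are estimates, not audited numbers):

```python
# Back-of-envelope version of "you do the math", with rough estimates.
hava_usd = 2e9            # HAVA funds: over $2B USD (~10B DKK)
per_state_usd = 10e6      # low end of "tens of millions per state"
states = 50

ongoing_usd = per_state_usd * states
print(f"${ongoing_usd / 1e6:.0f}M")  # at least another $500M nationally
```

Even at the low end of the per-state range, the recurring national costs rival the one-time HAVA outlay within a few election cycles.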

4 February 2013 at 17:20
eVoting: Listen to an expert

Here is the recording: https://c.deic.dk/p4nnxbkr1di/

Thanks for coming everyone, particularly Poul-Henning!

We had a completely full house (perhaps 200?) and another 37 watching online.

2 February 2013 at 00:01
eVoting: Listen to an expert

@Kim Jensen

You'll note that nowhere do I or DemTech suggest that evoting should be used for national elections. Instead, we ask whether or not this makes sense at all by posing a hypothesis.

The point of a scientific project is to propose a hypothesis and then objectively and rationally test that hypothesis and accept or refute it. DemTech's hypothesis in a nutshell is that "It is possible to modernize the Danish electoral process without losing the trust of the voters." A completely acceptable outcome of DemTech is to say, "No, it is not possible to modernize the Danish electoral process using computing technology." I'm cool with that, and I might even have an opinion about where I think the analysis will lead, but as a scientist, I'm going to look at the evidence, not speculative subjective fiction.

If you listen to my talk next week you'll learn about my personal perspective on the topic, rather than the position of the DemTech project as a whole. DemTech represents the voices of over a dozen researchers, many of whom are not computer scientists but rather ethnographers, political scientists, experts in democracy and elections, anthropologists, etc. Only a few of us are computer scientists and logicians.

25 January 2013 at 11:25
eVoting: Listen to an expert

Hi Joseph

Thank you for commenting here. Always easier when we can talk directly to the parties involved.

A couple of questions:

  1. You say that you were involved in hacking the Dutch systems. Was that what we see in this video: https://www.youtube.com/watch?v=sSsyYKgwnVk ? Rop Gonggrijp is the speaker.

  2. Will the DTU lecture be streamed or recorded? I cannot attend :o(

  3. What do you say to the people (like me and Kim Jensen) who believe that we should not help the government with creating an eVoting system, because then they can say "look, IT people created this and therefore it is safe to use"?

Hi Flemming,

Thanks for your question.

  1. Rop is a friend and likely to be a member of DemTech's Scientific Advisory Panel in the future. Currently our "technical" member is J. Alex Halderman.

Rop and his colleagues' work on hacking the Nedap machines took place a couple of years after my research group's work on defeating the proposed remote voting system "KOA". He was uninvolved in that effort; it was pro bono work I did with two colleagues at Radboud University Nijmegen. The aforementioned paper describes what we did and what we could do under the constraints we were under (i.e., we could not modify election data, even during the testing, as it would violate national law and potentially get me kicked out of the country).

  2. The DTU talk will be streamed for up to 60 viewers and will be recorded and available henceforth.

  3. The decision about whether or not I should do something was a really difficult one for me, one that I made back in 2003.

On the one hand, I completely understand the thinking behind saying "this is a bad idea and I refuse to participate".

On the other hand, as we have witnessed for fifteen years now, someone is going to build these systems, and those systems are, in general, simply terrible. They exhibit horrid software engineering, and they are proprietary and closed-source. So if a system is going to get built to digitize part of the election process, then I think a team of internationally recognized experts in rigorous software engineering, formal verification of safety- and mission-critical systems, and information security is exactly the right group to experiment and to advise governments.

Note that the systems we have built in the past are not full-blown evoting systems. I have completely stayed away from kiosk-based voting systems and focused entirely on counting votes correctly. Please do not give me credit for problems I have not tried to solve!

Another nice data point is that, while I can build verified, Open Source demonstration election subsystems (like tallying), I can continue to advise governments that it is a Bad Idea to charge down the path of digital elections. I think history shows that this can work. After all, my team has built demonstration verified software for election subsystems while performing security analyses on existing commercial systems purchased by governments, and while acting as a transparent expert advisor to government. Those governments (Holland and Ireland) consequently decided to do the right thing: shut down their evoting experiments and ban evoting by law. Come to my talk to hear more about this.

Many of my colleagues take the position that you should only criticize and not propose any alternatives. Many researchers only propose theoretical systems, new cryptographic schemes, etc., but avoid writing any software or building any systems, either because they do not have the skills or because they think it is a bad idea to make such systems available at all. Of those that do build systems (e.g., Punchscan, Scantegrity, Helios, Prime III, Prêt-à-Voter, Wroclaw's work, ourselves, and others), only my group makes guarantees about the correctness and security of our systems. Everyone else says "we make no promise that our system does anything right at all".

Finally, to help discount any conspiracy theories out there with regard to DemTech: everything that our project does is in the public view. All of our software, papers, and communications are transparent and open. Any member of the public can join, look around, look at the minutes for every meeting we have, and even ask for a snapshot of all of my DemTech email messages, and the messages of every member of DemTech.

25 January 2013 at 11:10
eVoting: Listen to an expert

Representatives from the government, the municipalities, and media will attend the lecture. So yes, we have the ear of the government, both via politicians and bureaucrats.

24 January 2013 at 12:44