Meta Prediction

Abstract

This is a prediction about AI and how it relates to the Autonomi network (previously known as the SAFE network). Some people will hate this, but I hope everyone at least takes notice.

Observation

AI/LLMs are here

AI or LLMs or neural nets are here. What it means, and how close we are to AGI/SGI or whatever, is irrelevant for this discussion.

We can talk to computers and they can talk back! We have a non-biological partner, no matter how smart or dumb, that we can speak with. It’s here and getting better at communicating: better than any animal or other biological creature, it is communicating.

So with that behind us and no proclamation of advancement or doom, we are where we are. Things change from here, and change in dramatic ways. This we know.

Still we wish to define intelligence, consciousness and so on. Not me. I just work with what I have, what I can see and what I can interact with. It’s here.

This is coming in several forms, agentic, multi-modal, fine-tuned, RAG (Retrieval Augmented Generation), as well as differing model sizes and capabilities. The world is changing and that will not slow down.

Prediction

What we end up with, nobody knows, but what I strongly believe is:

  • Direct hardware access will be their API: no OS layer, not iOS, not Android, but direct hardware access.
  • Apps and browsers as we know them will be bypassed.
  • Possibly all, but at least most, human-written code will be replaced by direct human-to-machine capabilities.
  • Files, directories and folders become antiques that only a select few refer to. Who wants to search manually through millions of files for knowledge when you can just ask?
  • Humanoid robots will make the physical world an extension of our computers and, likewise, controlled by us via our AI. Essentially, our computer and knowledge skills extend into the physical world.

Ok, that’s actually a lot, and each item could be a massive post on its own, but here we need speed and clarity, not pontification and back rubbing.

Effect on Autonomi

These changes to how we interact with computers move the goalposts for sure. Rather than storing just any data and creating file structures, web sites, browsers and so on, Autonomi must now protect humanity directly by protecting our own personal AI.

Centrally controlled AI is obviously evil, and the same goes for robotic control. The intimate relationship we will all have with our data/knowledge/AI must be protected, and that means secure, private and personal to each human, regardless of where you are, how you connect, or what device you use. This part is vital; without it, society will struggle to maintain individuality. That individuality is how we evolve and innovate; in fact it is how we live. Losing that could lose all of us.

This is Autonomi’s greatest offering: the ability to secure your AI conversations and make them available to you.

Which models will Autonomi provide?

The network should make any locally runnable open source model available to every person on the planet. It can take those human conversations and store them securely, ensuring only you have access to them. These are your property and your private thoughts. They must travel with you and be available from anywhere, to you alone.
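
To make that concrete, here is a minimal client-side sketch of the idea: the conversation is encrypted before it ever leaves your device, so whatever stores it cannot read it. This is illustrative Python, not an Autonomi API; `network_put` is an imaginary placeholder for whatever the network would expose.

```python
# Illustrative sketch only: encrypt a conversation client-side before
# storing it anywhere. Only the key holder can ever read it back.
import json
from cryptography.fernet import Fernet

def store_conversation(messages: list, key: bytes) -> bytes:
    """Encrypt a chat history; the store never sees plaintext."""
    ciphertext = Fernet(key).encrypt(json.dumps(messages).encode())
    # network_put(address, ciphertext)  # hypothetical network call
    return ciphertext

def load_conversation(ciphertext: bytes, key: bytes) -> list:
    return json.loads(Fernet(key).decrypt(ciphertext))

key = Fernet.generate_key()  # in practice, derived from your own credentials
blob = store_conversation([{"role": "user", "content": "hello"}], key)
print(load_conversation(blob, key))
```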

Agents

Agents are perhaps the simplest and most powerful of all today’s software tools. They are not understood by most people, but they represent a massive opportunity for us all.

An agent is basically a set of configurations we apply to an LLM to focus it on a specialised task. For instance, we can have agents that represent:

  • CEO
  • Software Developer
  • Software Q/A
  • Software tester
  • Web site builder
  • Customer support

What we do is simply supply basic information to make this happen. This info can be as simple as:

{
Name: CEO
Skill: Build teams and have final say on deliverables
BackStory: You are a CEO of a successful software company ...
LLM: <any LLM we wish goes here>
}

This small piece of text is all we need to create an agent. Some LLMs may be tuned for such roles, but even a generic one can be guided into acting as an agent, as above.
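
As a toy illustration of how little machinery this needs, here is that config as code. `call_llm` is a placeholder for any chat-completion endpoint you care to wire in; nothing here is specific to any one provider.

```python
# A toy sketch: an "agent" is just a small configuration wrapped
# around a generic LLM.
from dataclasses import dataclass

def call_llm(system: str, user: str) -> str:
    """Placeholder: wire this to any chat-completion endpoint."""
    raise NotImplementedError

@dataclass
class Agent:
    name: str
    skill: str
    backstory: str

    def system_prompt(self) -> str:
        return (f"You are {self.name}. {self.backstory} "
                f"Your skill: {self.skill}. Stay in this role.")

    def ask(self, task: str) -> str:
        # Send [system prompt, task] to whatever LLM sits behind call_llm.
        return call_llm(system=self.system_prompt(), user=task)

ceo = Agent(
    name="CEO",
    skill="Build teams and have final say on deliverables",
    backstory="You are the CEO of a successful software company.",
)
```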

The power of agents

The strength of this approach is that we no longer ask a single AI a question and get an answer, right or wrong. Instead, we ask our team of agents. They delegate tasks and use tools such as web search, stock checkers, spreadsheets and so on. They check each other’s output and can reject it as not good enough, demanding the task be redone.

So now we have planning, recursion, long-term workloads and much more. This allows us to take many LLMs, combine them in clever ways and have them perform and check tasks for us (sketched below). It basically allows humans to create virtual companies, project teams and much more, even with their own customer support, feedback and upgrade mechanisms to continuously develop their commercial offering. All without human intervention.

Or we can have teams to control our house, workshop, education and more. Really, there is no bound.
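
For the curious, here is a tiny sketch of that review loop, reusing the toy `Agent` class from the earlier sketch: one agent produces work, another judges it, and the task is redone until the reviewer accepts or we run out of rounds.

```python
# A sketch of agents checking each other's output. Assumes the Agent
# class and call_llm placeholder from the previous sketch.
def run_with_review(worker, reviewer, task: str, max_rounds: int = 3) -> str:
    feedback = ""
    draft = ""
    for _ in range(max_rounds):
        prompt = task + (f"\nReviewer feedback: {feedback}" if feedback else "")
        draft = worker.ask(prompt)
        verdict = reviewer.ask(
            f"Review this work:\n{draft}\nReply APPROVED or give feedback."
        )
        if verdict.strip().startswith("APPROVED"):
            return draft
        feedback = verdict  # the reviewer demands a redo with this feedback
    return draft  # best effort after max_rounds
```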

Autonomi and agents

This is a match made in heaven. Autonomi allows us to publish an agent and update it. Others can use it and lock to a version of our CEO (for instance) or always use the latest (Autonomi puts you in control at all times).

This is a totally new way to share and decentralise community-created content on a massive scale. As people gravitate towards whoever creates the best CEO or software dev agents, folk will use those agents and pay the creator. It’s quite magical.


Posted in Uncategorized

Simplicity, complexity and creativity

## Introduction

In recent years I have studied to help create the SAFE network, along with brilliant Engineers. It’s been a journey, and a typical creative endeavour. I have always figured AI, or at least evolutionary programming, would be a big part of that: from simple perceptrons through genetic programming and neuroevolution (NEAT in particular). Recent LLM success has certainly added an intriguing angle to these studies, mainly the notion that copying a brain may not be the best route to “intelligence”. This is fascinating, but more pleasing is the simplicity of the LLM models.

## Simplicity

Recently the tech world was stunned by generative AI models, in particular GPT, then the rest of the world followed as it became accessible via ChatGPT. What is remarkable to me, though, is the simplicity of “guessing the next token (word)” when we introduce context (attention is all you need). This shocked many of the “fathers” of AI to their very core. Not in how capable it was per se, but in how simply it managed to get to human-level capabilities.

We always thought creativity and our intelligence separated us from machines; it turns out that may be the thing we have most in common, at least for now. That should shake everyone!

## Creativity

Creativity is an interesting concept. It involves many steps: imagining things, testing things that might work but probably don’t, and then forging a path to make the previously impossible possible. What is imagining, if not hallucinating (as the AI world has come to know it)? So hallucination is good; it’s required for creativity. Stop any creative process midway and you will have a hallucination-like outcome.

Creativity also requires research, crazy ideas and something rather interesting: unbiased and limitless patience from partners. Innovation tends to come from individuals; big stuff like electricity, magnetism, relativity, QED and more stemmed from individuals. Partners are difficult as they introduce ego (their idea must be used) and they are not ready and willing when the lightbulb moments strike (late night, very early morning etc.). Having a near human-level LLM with broad expertise in many areas opens doors that just did not exist last year.

## Complexity

Professionals love complexity. It seems to boost the ego of practitioners in many areas, but nature hates it. When I say complexity I mean very narrow areas of knowledge presented in dense and confusing language, with many graphs and weird Greek-lettered formulae, or work so complex it is understandable only by a tiny percentage of experts. I see this as detrimental to the evolution of our species and our knowledge.

Nature is complex, and our brains were thought to be complex, but in my opinion that complexity may be much less than we assumed.

I think many “experts” will take modern AI and make even more complex papers and formulae from it. That will boost egos and do little for humanity in most cases. Complexity is really something we resort to when we cannot understand the fundamentals of life and the interactions of those fundamentals. Go back to “attention is all you need” and really think about that; it’s fascinating.

The complexity of understanding the coordination of many simple artefacts to produce something as powerful as a human, or as successful as an ant colony, is incredibly difficult, as it involves extremely deep thinking. It also requires, as we see in nature, many iterations with tiny changes each time. These systems will not work right until the last tiny change, then they do. At that point, though, another tiny change may break them. Interestingly, when they work it’s unlikely they can be described in any detail, which is where Engineers get nervous. But they shouldn’t be, as they are themselves actually built in this way.

This, to me, is the real area of complexity. It’s very much like a neural network that works: we cannot describe it in detail, just as we cannot describe an ant’s make-up in detail, at least in relation to its function as part of the colony. No amount of Greek letters and advanced mathematics seems to be appropriate. Many people may disagree, but this is where I feel real advances can be made, should we be able to teach this kind of thinking at a deep level.

## Context with respect to LLMs

A slight deviation here. Context is important when guessing the next thing. This is what gave modern generative AI (LLMs in particular) the jump we now see. However, it’s more than context in relation to the training, i.e. the model itself. That training corpus is a vast amount of raw data that is encoded into some vector representation of context, or distance between tokens (attention).
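
For the curious, the mechanism at the heart of this is small enough to sketch in a few lines. This is a toy version of scaled dot-product attention (the core of “attention is all you need”), not a full transformer: each token’s vector is re-weighted by its similarity to every other token, which is how context between tokens gets encoded.

```python
# A toy sketch of scaled dot-product attention.
import numpy as np

def attention(Q, K, V):
    scores = Q @ K.T / np.sqrt(K.shape[-1])           # similarity of every pair
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)    # softmax over each row
    return weights @ V                                # blend values by relevance

tokens = np.random.randn(5, 8)           # 5 tokens, 8-dimensional embeddings
out = attention(tokens, tokens, tokens)  # self-attention
print(out.shape)  # (5, 8): each token now carries context from the others
```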

## Why are the current AIs not that smart in some ways

This is again context. The training data may be perfect (it’s not, but assume it is), but the context from us to the AI is incredibly limited. It’s a small window in which we have this conversation. That window may allow only a few hundred messages in a conversation, and even then the model is vague about the middle of the conversation (a technical issue). The context on our side, the human side, is limited.

Going back to creativity: it’s a back-and-forth conversation of hallucinating, testing and changing. With limited context windows, current LLMs are crippled. When we add to that the “one shot” type of conversation, where the AI starts the answer almost immediately, it means we get a step-by-step look at the creative process. When we see a hallucination we stop and say “hey, this is broken”, but I don’t believe it is. I think we stopped a creative brain before it was able to even take step 2 of a multi-step process.

There are projects to increase the human-side context and perhaps even make it unlimited. There are also projects to allow the AI to iterate, but they are in early stages. This problem, though, is not complex at all, and projects like MemoryGPT, multi-stage prompting and agents are already making headway, if not already there in alpha form.

However, I believe the AI will be able to take in our individual (and hopefully private) life experience, and importantly it will not be complex models and algorithms that achieve this. It will be as simple as what we have today. So simplicity is key here; it cannot be stressed enough that the simplicity of these systems is what is startling us all.

## Conclusion

The SAFE network has taken a massive turn recently and gone back to its roots. These are founded in simplicity: a natural system based on the functions of an ant colony. That is sophistication through what looks like a complex series of actions we cannot understand, much like the neural networks in these simple LLMs. Each neural network node is super simple too; it’s adding them together that gives us the complexity that leads to sophistication. However, we cannot understand the simple reactions once they are added together, and neither should we need to.

It is the simplicity of the individual components communicating that gives us not only natural capabilities, but the ability to evolve systems. Complex mechanisms themselves are not scalable, understandable or natural.

The SAFE network, likewise, is based on simplicity and recently the team have made this even simpler, but with dramatically powerful consequences. Hard to explain, but easy to see the outcomes.

This is my excitement in AI: not the ability to create complex algorithms and be applauded as some kind of super genius, but to dive deeper into creating systems that comprise super simple steps to give us even more capable systems. Then, for us all to really understand the connectivity of all things at a fundamental level. It’s not about genius through complexity; it’s about genius through simplicity.

AI will allow us to create and evaluate simple interconnected systems in a way we never could. While others create complex components, I am ecstatic to look deeper and deeper into the cellular automata approach (again), enhancing creativity and advancing our species. So more ant investigation and much more fun!
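
To show just how little is needed for simple rules to produce rich behaviour, here is an elementary cellular automaton in a dozen lines. Rule 110, one tiny local rule applied to each cell, is famously capable of universal computation.

```python
# Simple components, sophisticated outcomes: an elementary cellular
# automaton. Each cell looks only at itself and its two neighbours.
def step(cells, rule=110):
    n = len(cells)
    return [
        (rule >> (cells[(i - 1) % n] * 4 + cells[i] * 2 + cells[(i + 1) % n])) & 1
        for i in range(n)
    ]

row = [0] * 40 + [1] + [0] * 40   # one live cell in the middle
for _ in range(20):
    print("".join("#" if c else "." for c in row))
    row = step(row)
```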

Posted in Uncategorized

Shaping the Future: From Data to Knowledge in a Privacy-Centric World

In the rapidly evolving digital landscape, the traditional paradigms of data management are being challenged. As the digital footprint of individuals and organizations continues to expand, there arises a critical need to shift from data-centric to knowledge-centric models. This post delves into key areas of this transformation: Digital Advertising, Personal Data, Shared Data, Secure Knowledge Management, and Nano Payments.

1. Digital Advertising Reimagined

The current digital advertising model primarily focuses on capturing consumer attention. However, as Artificial Intelligence (AI) advances, a paradigm shift towards value-based advertising is imminent. Consumers, assisted by AI, will seek the best value, pushing advertisers to offer competitive pricing and quality assurance. The shift to value proposition could foster long-term relationships with consumers, promoting a culture of transparency, fairness, and consumer satisfaction. This shift reflects a mature understanding of consumer needs, moving from mere attention capture to delivering real value.

2. The Evolution from Personal Data to Personal Knowledge

Transitioning from data to knowledge represents a novel approach to personal data management. Technologies like machine learning and vector databases can extract and compactly represent knowledge, minimizing data storage requirements and enhancing privacy. This transition has the potential to impact various industries significantly, offering a granular level of control over the information shared and fostering a sense of ownership and control among individuals.

3. Collective Wisdom: The Future of Shared Data

The concept of sharing knowledge instead of raw data opens new vistas for collaborative research and innovation, especially in sensitive fields like healthcare and finance. Cryptographic techniques like Homomorphic Encryption and Zero-Knowledge Proofs ensure privacy and authenticity in this knowledge sharing paradigm. A novel proposal further enhances privacy by introducing an anonymized response mechanism, where entities deposit their answers to questions in a designated “dropbox”, decoupling the identities of questioners and responders, thereby fostering a more privacy-centric knowledge sharing ecosystem.
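
To make the “dropbox” idea a little more concrete, here is a toy sketch of the decoupling. A plain dictionary stands in for a shared decentralised store, and the address is derived from the question itself; a real system would add encryption, padding and access control, so treat this purely as an illustration of the shape of the mechanism.

```python
# Toy sketch: answers are deposited at an address derived from the
# question, so questioner and responder never learn who each other are.
import hashlib

store = {}  # stand-in for a shared, decentralised key-value store

def dropbox_address(question: str) -> str:
    return hashlib.sha256(question.encode()).hexdigest()

def deposit_answer(question: str, answer: str) -> None:
    store.setdefault(dropbox_address(question), []).append(answer)

def collect_answers(question: str) -> list:
    return store.get(dropbox_address(question), [])

deposit_answer("average outcome for treatment X?", "improved in 62% of cases")
print(collect_answers("average outcome for treatment X?"))
```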

4. Secure Knowledge Management: Harnessing the SAFE Network

The SAFE Network, with its decentralized, privacy-focused approach, provides a solid foundation for managing and sharing knowledge securely. Its principles align well with the idea of giving users control over their data and knowledge. The integration of secure data management on the SAFE Network with knowledge extraction techniques could create a powerful platform for managing and sharing knowledge securely, paving the way for a new era of secure, user-centric digital ecosystems.

5. Nano Payments: Enabling Autonomous Transactions

The convergence of the SAFE Network’s nano payment processing capabilities with personal AI technologies could usher in a new era of financial transactions. This scenario could enable real-time payments for real-time services, micro-subscriptions, and decentralized finance (DeFi) applications, ensuring a seamless user experience while adhering to financial regulations and standards.

Conclusion

The technological and conceptual shifts discussed in this post paint a picture of a future where individuals have more control over their data and knowledge, where new business models become viable, and where privacy is a fundamental pillar, not an afterthought. As we transition from a data-centric to a knowledge-centric world, the opportunities to reshape the digital landscape in a privacy-centric manner are boundless.

Posted in Uncategorized

The SAFE Network: A Deep Dive into the Wisdom of Natural Systems

Introduction

In the ever-evolving landscape of technology, the quest for perfection often leads us down the path of complexity. Engineers and cryptocurrency advocates alike are enamoured with the idea of “provably correct” systems, which hinge on the concept of network-wide consensus. However, as physicist Richard Feynman wisely noted, “If you cannot find it in nature, then it’s wrong.” This blog post aims to explore the inherent wisdom of natural systems, like ant and bee colonies, and how their principles can be applied to create a more secure and efficient SAFE Network.

The Mirage of Network-Wide Consensus and Total Order

The Engineer’s Paradox

Engineers, especially those with a strong mathematical background, often find themselves in a conundrum. The pursuit of provably correct systems, whilst intellectually satisfying, can lead to significant delays in practical implementation. Moreover, these systems often rely on unnatural constructs like network-wide consensus and total order, which are not found in nature.

The Bottleneck of Total Order

One of the most unnatural aspects of today’s decentralised networks is the concept of network-wide total order. This total order creates a sequence that effectively paralyses individual nodes, forcing them to wait for the majority to agree on the last decision before they can act. This creates bottlenecks and significantly hampers the network’s ability to react and adapt quickly.

The Cryptocurrency Quagmire

The cryptocurrency world is another domain where the allure of provable correctness is strong. However, this often leads to complex systems that require network-wide consensus to function. This approach, whilst secure on paper, is not necessarily the most efficient or natural way to build a resilient system.

The Wisdom of Natural Systems: Lessons from Ants and Bees

The Principle of Collective Altruism

In natural colonies like those of ants and bees, each individual works for the collective good. They don’t create food; they gather it. The focus is on the collective well-being rather than individual gain, and this is achieved without any form of centralised control or consensus.

Natural Security Measures: Identifying Bad Actors

Nature has its own security mechanisms. In a colony, individuals that deviate from the norm in behaviour or appearance are quickly identified. They are not punished but are simply ignored, effectively neutralising their impact on the colony. (I should add that active vandals or attackers are swiftly dealt with by nearby ants. Those ants do not wait for the whole colony to agree on the action they will take.)

The Natural Incentive in Cryptocurrencies: Self-Interest as a Security Measure

The Strength of Bitcoin and Other Cryptocurrencies

Whilst cryptocurrencies like Bitcoin are often lauded for their complex algorithms and mathematical rigour, one of their most potent strengths lies in a simple, natural mechanism: individual self-interest. The desire for monetary gain serves as a powerful incentive for participants to play by the rules.

The Cost of Cheating

Cheating or attempting to defraud the system could offer short-term gains but comes at a significant cost: the potential collapse or devaluation of the network from which the value is being extracted. Participants understand that for them to continue benefiting from the network, the network itself must survive and thrive.

The Balance of Power

This natural mechanism is finely balanced by the network’s need to grow quickly enough to fend off vandalism attacks. Bitcoin effectively manages this through its Proof of Work (POW) system, which ensures that the computational power required to cheat the system outweighs the potential gains.

The SAFE Network: Embracing Natural Principles

The Network’s Limitations: No Creation of Raw Materials

One of the most striking features of the SAFE Network is its inability to create its own raw materials, or data. This limitation is not a weakness but rather a reflection of natural systems. Just as ants and bees cannot create food but must gather it, the SAFE Network relies on external data inputs to function. This ensures that the network remains a part of the larger ecosystem, interdependent and not isolated.

The Power of Individual Nodes: Collective Behaviour without Consensus

In the SAFE Network, collective behaviour emerges from the actions of individual nodes, much like a bee colony deciding to swarm. This is not a result of network-wide consensus but rather the cumulative effect of individual actions. When enough nodes follow a particular path, the network appears to make a decision, but this is the “will of the people,” so to speak, rather than a formal rule or strict order. This mimics the natural world, where collective actions like swarming or foraging are not dictated by a central authority but emerge organically from individual behaviours.
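
A toy model makes this concrete. In the sketch below, each node follows one purely local rule (copy the majority of a few randomly sampled peers), yet the population as a whole usually settles on a single choice with no global vote ever taken. This is an illustration of the principle, not SAFE’s actual algorithm.

```python
# Toy model of decision-making without consensus: local rules only,
# no global coordination, yet the network typically converges.
import random

def local_step(opinions):
    new = []
    for _ in range(len(opinions)):
        sample = random.sample(range(len(opinions)), 5)   # a few random peers
        new.append(1 if sum(opinions[j] for j in sample) >= 3 else 0)
    return new

opinions = [random.randint(0, 1) for _ in range(1000)]    # initial split
for _ in range(30):
    opinions = local_step(opinions)
print(f"{sum(opinions)} of {len(opinions)} nodes agree")  # usually 0 or 1000
```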

The Perils of Total Order: A Lesson from Nature

Imagine a herd of gazelles being hunted by a predator. If each gazelle had to wait for the last one to run first, and for the majority to agree on which one was last and which one is next to run, it would be a feast for the predator and an extinction event for the gazelles. Natural systems empower individuals to act en masse, allowing for quick reactions and adaptability, a feature conspicuously absent in systems that rely on total order.

Conclusion: The Path Forward

As we venture into the realm of decentralised systems, it’s crucial to heed the lessons that nature offers. Our planet has spent over 4 billion years perfecting systems that are both efficient and resilient, without the need for unnatural constructs like network-wide consensus or total order. By embracing these principles, the SAFE Network can become a beacon of security and efficiency in the decentralised world.

By aligning our technological endeavours with the wisdom of natural systems, we can create a SAFE Network that is not just provably correct, but also “naturally correct.” After all, nature is the best engineer there is.

Posted in Uncategorized

Beyond a copy of the Internet.

We know Artificial Intelligence (AI) is coming, we see the Internet of Things (IoT) happening.

We know trains, planes and automobiles will become autonomous. This is not news. We know data is key to modern industries, we know robots will communicate, we understand and accept securing all of this will be a nightmare. The consequences of failure could be cataclysmic. I will refrain from inserting the obligatory terminator graphic here.

We also know that companies, projects and devices need not only to communicate, but also to share information securely. This is another issue. If nobody, including the NSA, GCHQ, governments or large tech companies can secure the information, who can? Not only that, but the holder has a wee bit more power than they should, especially if they control access. If it’s given to third parties to control, then it gets much worse.

We need a way to share information securely and occasionally privately. This data cannot be blocked, damaged, hacked or removed from the participants whilst they are in the group. The group must decide as a group on membership; no individual should have that authority. No administrators or IT ‘experts’ should have any access to a company’s critical data.

Potential Solution

With the SAFE network, developers have a ton of currently untapped capability. Here I will describe one such approach (of many) that will allow all of the above systems to co-operate securely and without fear of password thefts and/or server breaches.

SAFE has multisig data types (the machinery is there, but not yet enabled). These data types are called Mutable Data (MD) and can be mutated by their owners. A possible solution quickly becomes obvious.

A conglomerate starts with one entity; this could be a person or a company. Let’s call that entity ‘A’. Now ‘A’ creates a Mutable Data item that represents the conglomerate, which we will call ‘C’. ‘C’ is a Mutable Data item: a type with a fixed address on the network but no fixed abode, so it appears on different computers at different times. Only the network knows where the data is. This ‘C’ can then be almost anything (see the sketch after this list). Some examples might be:

  1. A list of (possibly encrypted) public keys that represent each participant’s ‘root key’.
  2. A list of (possibly encrypted) participants.
  3. An embedded program
  4. Any data at all (endless opportunities really)
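
Here is the promised sketch: a toy model of a multisig Mutable Data item whose contents change only when a majority of owners approve. Signature verification is elided and the names are invented; this is the shape of the idea, not the SAFE API.

```python
# Toy model of majority-owned Mutable Data (not the SAFE API).
from dataclasses import dataclass, field

@dataclass
class MutableData:
    owners: set                            # public keys of the participants
    content: dict = field(default_factory=dict)

    def mutate(self, new_content: dict, approvals: set) -> bool:
        """Apply a change only if a strict majority of owners approve."""
        valid = approvals & self.owners    # ignore non-owner signatures
        if len(valid) * 2 > len(self.owners):
            self.content = new_content
            return True
        return False

c = MutableData(owners={"pk_a", "pk_b", "pk_c"})
print(c.mutate({"members": ["A", "B", "C"]}, approvals={"pk_a", "pk_b"}))  # True
print(c.mutate({}, approvals={"pk_c"}))                                     # False
```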

Let’s take a few cases of applications for different industries. Note that this is not a specific application for these industries; it’s to facilitate global collaboration. Below is a snapshot of some possibilities, but even these are likely to be superseded by more innovative and better thought out approaches. However, these examples work. Each industry will later feature in a more detailed analysis.

1: The Autonomous Vehicle Industry

Photo by Alessio Lin on Unsplash

These vehicles should obviously communicate with one another, and indeed there are moves to enforce collaboration. However, there’s the problem of ownership as described above, and also an issue of corruption by the industry, such as altering documents and data (to match emissions targets, for example).

Potential Solution

A simple solution here (which will repeat in many cases) involves the creation of a fixed network address (via a Mutable Data item in SAFE). This will be created by a single entity, say Ford for example. To collaborate, they add GM, Tesla etc. as ‘owners’ of this item. With multisig, this means ownership is held by the majority of ‘owners’. Now we have a data item on a network that is not owned by any of the companies involved, but is editable by a majority of those owners.

For each ‘owner’, the data item may contain a list of public keys or companies, and this list allows us to validate items signed by those keys as valid from the perspective of this conglomerate of companies. So what can the companies do now?

The companies can now create their own data types: for example, another Mutable Data item that contains a list of manufactured vehicles’ public keys. Now any vehicle added to the company list can be confirmed valid by any other vehicle (or system), and the vehicles or systems can communicate safely, in a way that is cryptographically guaranteed.
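
As a sketch of that cryptographic guarantee, here is a sign-and-verify round trip using Ed25519 from Python’s `cryptography` package. The assumption is that the verifier has already found the vehicle’s public key in the conglomerate’s Mutable Data list.

```python
# A vehicle signs a message; any peer holding its listed public key
# can verify the message really came from it and was not altered.
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

vehicle_key = Ed25519PrivateKey.generate()
public_key = vehicle_key.public_key()   # this is what goes in the MD list

message = b"charging bay 4 is free"
signature = vehicle_key.sign(message)

try:
    public_key.verify(signature, message)  # raises if forged or altered
    print("message is from a listed vehicle")
except InvalidSignature:
    print("reject: not signed by a known vehicle")
```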

The communications may now be encrypted between vehicles and may even include micro-payments for services such as shared charging in the future.

2: Robotic & AI Industries

Photo by Alex Knight on Unsplash

Similarly to the above mechanism (although there might be no conglomerate or top-level group involved), it would be simpler if robots followed a common route to information sharing. In this section, where the reader reads ‘robots’ they should think of robots or AI networks.

The interesting thing here will be the potentially enormous quantity of information that robots could share globally. Imagine for instance that a robot maps out a room or learns CPR, or perhaps even the location of a good repair shop. Would it not be incredible (and obvious) if they could share that information securely and immutably with all the other robots across the globe?

Robots, like humans, will likely not know everything at all times, but will instead wish to tap into a much larger data set than they can hold. Importantly, they would want access to a more recently updated data set than they could hope to achieve singularly. Yes, even robots will not be omniscient: like other species they will continue to be limited by natural laws to some degree.

There are several projects looking at ways to enable information sharing between robots, such as the recent(ish) EU project RoboEarth. But how do the robots know the data is from one of them and not a corrupt source? This surely will be important?!

So, here we wish robots to be sure of identity and – as with vehicles – have the ability to revoke (invalidate) such identities, or possibly even remove recent posts from the collective.

Potential Solution

A robot manufacturer can create a Mutable Data item. With this item, each robot built will create a key-pair and the public key will be added as an owner to the Mutable Data (this can be made into a tree structure to hold billions of identities). The data element will be the location of further lists of valid robot keys; these may also be Mutable Data.

The secondary Mutable Data items above will contain many more robot keys. Each of these data items can now be thought of as a mini-community. In these communities, a delegate can be chosen to be one of the owners of the core Mutable Data item (i.e. the conglomerate one, as above). This delegate will vote as per the group’s instructions, and failure to do so will mean the group (all owners) removes that robot from the group and the community as a whole.

Now we have a mechanism where billions of robots can recognise the public keys of their peers. They can do this via any key exchange (ECDH etc.), or by using asymmetric encryption keys, so they can encrypt data and communications between themselves. They can confirm the validity of any data posted (cryptographically signed by a robot) and that the poster is behaving as expected by the rest of the robots.
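
A minimal sketch of that key exchange, again using the `cryptography` package: two robots that already trust each other’s public keys (via the Mutable Data lists) derive the same shared secret with X25519 and turn it into a symmetric key.

```python
# X25519 key exchange between two peers, then HKDF to get a usable key.
from cryptography.hazmat.primitives.asymmetric.x25519 import X25519PrivateKey
from cryptography.hazmat.primitives.kdf.hkdf import HKDF
from cryptography.hazmat.primitives import hashes

a_priv, b_priv = X25519PrivateKey.generate(), X25519PrivateKey.generate()

# Each side combines its private key with the other's public key...
a_shared = a_priv.exchange(b_priv.public_key())
b_shared = b_priv.exchange(a_priv.public_key())
assert a_shared == b_shared  # ...and both arrive at the same secret

# Derive a symmetric key from the shared secret for actual encryption.
key = HKDF(algorithm=hashes.SHA256(), length=32, salt=None,
           info=b"robot-link").derive(a_shared)
```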

Now we have the ability for robots to learn and post results on a secure autonomous network. AI networks can share neural networks suited for specific tasks; general AI may be aided in such scenarios with multiple networks that can be accessed globally and cannot be corrupted, whilst remaining validated at all times (many AI bots could rerun the network to prove results are legitimate). Reinforcement learning and neuroevolution bots could share experiences from similar, but not identical, findings of the real world.

It could prove rather exciting to dive deeper in further posts. This is the sort of thinking that drove the need to create SAFE in the first place. The fact that servers are unnatural and illogical is bad, but their susceptibility to corruption is much worse. These mechanisms require incorruptible networks, or at least the ability to recognise and control bad information. We need secure autonomous networks for some really exciting collaboration and advances. I would love to investigate and debate these opportunities with a wider audience. It’s not possible without this autonomous network though. This is why I’m so excited by the prospect of SAFE.

3: Healthcare industry

Photo by freestocks.org on Unsplash

An interesting thing about scanners and medical devices is that they don’t need to know your name!

A scanner cannot make use of your name. Today we can, at exponentially decreasing cost, decode your whole genome. Scanners for home use (see the XPRIZE Tricorder competition) will improve dramatically, at increasing pace and with significant cost savings.

Here’s a thought: instead of protecting medical records, why not just remove them completely?

Here’s another thought: instead of building more hospitals and staffing them, could we not let technology help, especially with ageing populations and more life-extension programmes working efficiently? Hospital and staff shortages could be offset with much better tools for the job; by this I mean building testing equipment and using genomics and proteomics to reduce the need for hospitals and staff. The answer to hospital shortages could well be to remove almost all of them, except for trauma centres and centres that provide assistance, such as maternity units.

With some of these ideas we could look to revolutionise healthcare and increase the health of the population with dramatically decreased budgets and much less human error. We could become much more caring and able to look after those who need help, at the point of need, which should be the point of diagnosis and immediate delivery of solutions. With home diagnosis equipment it’s quite likely that in the future your devices could detect disease faster than you realise your antibodies are responding to it. Imagine never feeling ‘under the weather’ again?! Of course there will always be reasons to feel a bit down some days and up on others; the human experience will always be relative to its context.

You can probably tell that I have a bit of a ‘bee in my bonnet’ on this one. It frustrates me to see people refused care, or drugs not being used due to cost. We can do better. If we could diagnose and deliver care and medication at home (ultimately we might be able to fix our own problems ourselves) then we could potentially revolutionise healthcare for everyone.

We now have the solutions for autonomous vehicles, robotics and AI behind us. Those tools are in our arsenal and we can proceed from here. This is rather exciting to me and a huge win for humanity. So: we know measurement devices do not know our name; we know that accurate scans can tell a lot about our current physique and detect anomalies; and we know that genetic sequencing can accurately map our physical make-up and detect anomalies there. This means we could actually achieve a goal of instant medication and resolution of problems. But how do we get from here to there?

Potential Solution

We can already see how medical devices, like robots, can communicate with each other on an autonomous network such as SAFE. These devices can measure many things, including DNA, genome, fingerprints and much more. Therefore, they can link scans to identities, but they should see each ‘thing’ they scan as a unique set of atoms that are (hopefully) all connected correctly, with a genome to ascertain its health. As a patient is scanned, any anomaly will be treated. The treatment should be supplied or recommended by the scanner. When the treatment is applied, the scanner will scan again and be able to remember the condition and the treatment.

This gives the scanners a view of what works and what does not for the huge numbers of unique humans they’ll deal with. In short, the scanners can test treatments and measure outcomes. With that information, and the sharing mechanism described above in the AI scenario, they can take the findings from the whole human population to ensure the best possible treatment, based on massive sampling. In this manner the scans of a problem can be combined with the relevant DNA/RNA strand to relate the scan to others of similar composition. That should mean treatments are continually improving. This continual improvement not only makes us all healthier, with improved longevity, but also dramatically reduces the health budget. Hopefully that would mean health becomes universal, closing the gap between rich and poor, people and countries.

Conclusion

This post has attempted to give a tiny (minuscule) glimpse into the possibilities of autonomous, connected, non-owned data and computation capabilities. Many types of machine, intelligence and future invention could collaborate simply, freely and without corruption. There would be an initial requirement for humans to start such systems, but they should quickly become self-managing. This means there would quickly be zero requirement for human intervention, apart possibly from the Engineers who provide updates as technology improves. Another post will investigate this particular feature of autonomy in reference to data networking and beyond.

I hope this has been a useful exercise, or at least that it helps to probe the imagination and lets us look at problems from a different perspective. In a secure and autonomous data network, many of today’s issues simply evaporate. I certainly hope this post has challenged some people to think differently.

Posted in complex systems, MaidSafe, nature, Personal Opinion, Uncategorized

SAFE, use case. Honest data networks

Photo by Markus Spiske on Unsplash

Initially this “use case” is more like a “reuse case” for solutions that some blockchain-based projects have promoted or implemented. This post will not name or directly criticise any project in the space; innovation is innovation and will always improve. We need to take step one, but we need to realise it is the first of many. I hope this post also encourages more people to dig a little deeper into this important area.

The first case I would like to discuss is where projects use a public ledger (blockchain) and claim to “publish” data and ensure its integrity, meaning it cannot be removed, edited or ignored in the future. This notion has also slipped into “private ledgers”, but in a very curious way. Let’s take a moment to explore the conundrum that covers many cases in today’s blockchain-based projects.

Secure document records

Photo by Susan Yin on Unsplash

There are a few projects making use of the immutability of the blockchain in interesting ways. One such way is to “make data honest” or prevent rewriting of history, others similarly promote governance or identification services and so on.

These ideas are actually a huge deal; they could be recording land transactions, medical research and much more, but several existing projects currently fall an obvious bit short of a solution.

To be clear, I do not promote or endorse governance or identity services; this is just my personal opinion for now.

So what’s the problem?

These projects store a data identifier on the blockchain. This is generally a “hash” (a digital fingerprint). Ignoring collisions (which are unlikely), the problem is simply: where is the data? And therein lies the issue, the conundrum, the unspoken fact that these networks do not “make data honest”, as it’s not data they hold, only hashes.

So currently these networks/projects keep data honest only if you can somehow get the data, and there is the rub. The blockchain network does not secure documents. It is brilliant at public transaction data though. In fact the data is somewhere else, not where it’s apparently “secured” (on the blockchain). It’s quite a feat to market such projects as secured historical records, as they clearly are not. At their core these projects only publish and lock data identifiers, not data. It’s like a passport showing up for an interview instead of the actual person!
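
The conundrum fits in a few lines of Python. The ledger holds only the fingerprint; verification is possible only if someone, somewhere, still holds the actual document.

```python
# A hash on a ledger can only confirm data you already hold;
# it cannot produce or protect the data itself.
import hashlib

document = b"land title: plot 42 belongs to ..."
on_chain = hashlib.sha256(document).hexdigest()   # all the ledger stores

def verify(data: bytes, recorded_hash: str) -> bool:
    return hashlib.sha256(data).hexdigest() == recorded_hash

print(verify(document, on_chain))  # True, but only because we kept `document`
# If the off-chain copy is lost or withheld, the hash proves nothing at all.
```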

Make it SAFE

In SAFE, or any secured autonomous data network (I would love to hear of others), the problem just does not make any sense. The location of the data store is the location that also secures the data.

The data is the thing that is secured. It’s hard to say this without sounding overwhelmingly obvious, but securing data means securing the actual data. Securing an identifier (hash) somewhere that the data isn’t, just will not work. So in SAFE the data is already honest. The projects in this space can use this feature to achieve their goals much more completely. Some have already said they would.

Secure data networks

Photo by Jingyi Wang on Unsplash

The next set of projects are the “decentralised networks” or “Internet 2.0 or 3.0”, or whichever number is highest in the scuttlebutt, or so it seems at times. Many go a little further than the document-recording projects above, but not really. Again, many of these projects store data identifiers on a blockchain, but not the data. Many of these projects aim to compete with Amazon, Dropbox and others.

This is a conundrum I really struggle to cope with. If the network is secure and the data on it is secure, then why store identifiers somewhere else, on another network (a blockchain)? I really find it difficult to make sense of these dichotomies.

Incredibly, from a decentralised perspective, some of these networks require a login to services using managed servers, and yes, that is pretty painful. As we know, these centralised points are both insecure and fail to protect our privacy. Some use clearnet (blockable, traceable etc.) services like DNS, NTP etc. I will not dive into the networks that allow service price setting by users; that again is not autonomous.

To us, autonomous means truly autonomous, and decentralised means truly decentralised (each requires the other). This is why it’s important to really figure out the details in projects that partially implement these concepts. We are better off remaining focused on these facts instead of losing core concepts amongst cryptic marketing messages.

Make it SAFE

So how do we “fix” these issues? An important step is to realise that if the network is secure, then that’s where you store the data. It really is that simple. If the network is secure, then it should also store your hashes/tokens/currency/coins; now that’s another discussion, but one that does need to happen. Adding computation to such a secure network then becomes a much simpler proposition.

To be clear, make it autonomous and then make it decentralised. Then we stand a chance of achieving security and privacy. Then we can all achieve that elusive freedom that the future demands of us.

Posted in bitcoin, MaidSafe, Personal Opinion

Data is the currency, literally!

Photo by Freddie Collins on Unsplash

In the last post, I discussed “the impossible network”, an autonomous network designed to protect the world’s people and their data. Before moving on to use cases for such a network, I thought it required a little more clarification.

Many people have said that sounds like project … (insert many blockchain products that store data); however, I think this could not be further from the case. SAFE is an autonomous network for a start; I do not think any other project that manages private and public data claims this (private means it must provide some method of self-authentication), but I would love to hear of any that did. Never mind one with an inbuilt incentive mechanism.

The currency on such a network would obviously be data, but I do not mean that in some abstract sense; I am literally stating that the currency in such a network is a data type. Not a separate network or an add-on component, but a data type like any other on the network. From a cryptocurrency perspective the network is the ledger, but in the case of SAFE that ledger is private and most certainly not an add-on; it’s an implicit function.

Another misunderstanding is related to projects that store some data identifier (e.g. a hash) in a blockchain, but store the data “somewhere”. That “somewhere” could be some person’s computer that they control, or some company’s servers etc. This is the very thing we are supposed to be removing: those pesky corruptible intermediaries. To us in MaidSafe this is a huge red flag and almost as bad as just not storing the data at all. If the data can be corrupted, or people’s access denied etc., then it’s simply not physically secured; there is no debate here. If you store the data, then you have the identifier, so there is a dichotomy with those networks that is hard to reason about.

SAFE stores data identifiers (in data chains) and also stores and protects the data itself; this is what farmers get paid to do. That is how we commercialise the world’s unused disk space automatically for ordinary people across the globe and provide the cheapest possible secure storage (and computation eventually) for everyone on the planet.

So if you are to remember one thing about SAFE, it is this: the network, not any human, protects our data, and this includes controlling all costs, rewards, storage, access, communications and currency.

Now I will get on with use cases; I hope this makes the impossible network a bit clearer. It sounds huge, but it’s actually much bigger than that 😉

Posted in complex systems, MaidSafe, Personal Opinion, safecoin

The Impossible Network

In recent weeks and months, the MaidSafe team have been very quietly progressing something quite amazing. The dedication and commitment of the team is admirable, but the task is so great that we forget how huge the prospects are. Not only that, we also at times forget to talk publicly about it. This will change, I am sure, but in this personal blog I do intend to get the message across and try to really explain the potential here. It is quite astounding and can change our world, but it needs to be better understood, even by early adopters.

The Back Story

We have had several discussions in house recently about the SAFE network, or the MaidSafe design. These discussions are surprising: we are thinking about what we are offering, not the vision, not the design and not the roadmap; these don’t change. The issue we have been discussing is one of perception and, by extension, imagination.

I had better explain this rambling introduction slightly better. As always, some historical perspective helps. In 2014 we aligned with the crypto community, with similar drive and goals in many respects, but very different products, and this is where the issue happens. We chose to define our goals as decentralising the Internet. All the “it’s already decentralised” arguments aside, it was a message we felt folk could follow. “We will do for data what bitcoin is doing for money” (again ignoring the side arguments about wealth, stores and all that jazz) was a mantra we used to drive home our message.

It worked, and worked well, but with issues. Being us, we got to work, heads down, bums up, designing, coding and testing. Nothing could stop us; we were, and are, focused on delivery of this “impossible network”. However, as society does, never letting a good message go to waste, the decentralised Internet message gathered pace, simply became overused and no longer differentiated us. Some projects even started calling miners farmers and suchlike. The confusion was not good for us, but we ignored it, mostly. Recently we have noticed we are actually perceived as another version of several projects. Initially we were compared to Tor, BitTorrent, Freenet etc., but now it’s any crypto project with storage or some networking. So there was a shift: decentralised Internet became “use some crypto and (maybe) store data somewhere”.

I am not disrespecting any project here; this is our project I am thinking about, and, just like in 2014, it is being considered just the same as project X, where X is now anything crypto with storage, where previously it was any project that stored stuff, regardless of crypto. To be fair, I have not looked deeply at many of these newer projects past the headlines.

So what’s the problem? Competition is great; it shows we are on the right track, or at least not going it alone. Well, there is a problem, and it’s our problem. The decentralised Internet does not describe us any more, well no more than “some computer thing” does. We have failed to really explain to people what MaidSafe is and what the SAFE network will be able to achieve. The interesting thing I see here is that the decentralised Internet mantra is used by projects that have pivoted. This shows how many projects stick this description on their products and goals even when those products change quite dramatically. Therefore it is essential we do not use an overused and frankly abused phrase to describe in a few words what we do.

Ok then what is SAFE?

This is the critical point: SAFE at its core is an “Autonomous Network”, not a set of federated servers, or owned storage locations, or identifiable nodes, but actually an autonomous network. This means no human intervention: no humans setting prices, no altering configurations to make things work, no tweaking data on disk, no altering the rules of the nodes, absolutely no human intervention apart from running a piece of self-configuring, self-healing software. The network decides prices, rewards and how to protect data, communications and calculations, not any human!

By the very definition of an autonomous network, it must be secured against all known threats, otherwise it would not live long at all and would be rendered useless almost immediately. Threats such as collusion-based Sybil attacks (people “buying” other people’s vaults etc.) as well as many DoS attacks (hence no leader-based consensus) have to be defended against; these are just a couple of the difficult problems we had to solve. There are a great many threats and challenges, way too many to mention here. In short, this is extremely difficult to achieve, and naive implementations that ignore even one of these threats will certainly fail. Many answers are not in current papers, research or literature; Engineers need to just suck that up and get innovating, and that’s great.

The network alone must know the user accounts; nobody else should. If you think about this, it’s critical and sounds simple, but some thought experiments will show it’s far from simple, otherwise we have intermediaries and, by extension, corruption. Unless the user wishes to let folk know, they are truly anonymous. The network masks IP addresses to prevent snooping. The network also balances the supply of resources and the demand on those resources, via “paying” a human somewhere to run this piece of software.

This autonomous network works in simulations, and the wider network is being proven by running and measuring testnets and releases. This means nobody owns SAFE, nobody controls SAFE and nobody can manipulate SAFE; groups cannot collude to successfully game or attack SAFE, and SAFE does what SAFE is programmed to do: protect our privacy and security and give us the freedom to communicate, think privately, share privately, learn and teach others. SAFE is the network and the network is safe.

So, “autonomous network” is a mouthful and will confuse many people; what do we really mean in simple terms? Well, it’s the Impossible Network, that thing that should not exist, the flying car, the self-driving truck, the moon landing. It’s just the impossible network, until we make it possible. A network that secures data, communications and calculations for us all, without owners, intermediaries or third parties. It’s a living thing, if you like, something we switch on and the world uses. People power the network for people and get paid for it; people read, watch movies, publish information and hopefully innovate, collaborate and further society, without borders, control or fear.

That’s worth doing and worth working very hard to achieve. However, it’s complex and very easily misunderstood. It’s also, unfortunately, not easy to release in parts: autonomous means no intervention, so it requires a whole working foundation. That is no easy feat, but if it were easy then none of us would find it interesting to work on; I certainly wouldn’t. If you want to terrify Engineers, put them on projects like this. If they have integrity they will be terrified, but if they relish a challenge they will be satisfied by all the pressure and the apparently impossible issues that absolutely must be solved.

What are we doing with SAFE right now?

At this time the testnets are rolling out; Alpha 2 is a few weeks away, or may be launched before I finish this post. This is replicating current Internet-based applications. This seemingly strange task is quite essential, but in some ways difficult to do. We are forcing a very intelligent, huge smart-contract system to behave like the normal Internet. The reasons, though, are varied, but mainly it allows application developers to start on a familiar path. Then, as we dive deeper, comes the realisation that these apps (although amazing, and competing globally without infrastructure costs etc.) are actually only the tip of the iceberg of possibilities. A conundrum akin to Tim Berners-Lee using the internet to stick stamps on emails. In our case, though, it is valuable, but it is 100% a stepping stone to a much wider and probably more useful set of services that developers can create for people.

What lies between now and launch/Beta

There has been quite a bit of work on data chains, which is basically a design that allows the network to validate that data was stored securely, and that transactions happened on the real, live network and were not injected from outside. This provides important features that are required for the network to restart and to manage network partitions. Nodes can then republish data as individuals, but backed by digitally signed proofs etc. Data chains and disjoint sections allow a lot of features, including secure message transport and more. You can read about this in the RFCs and this blog, and there is more on the implementation design of part 1 in the dev forum. We have split this task into two parts; Data Chains part 1 secures groups of nodes. This all happens at the routing layer and actually creates an autonomous network, although one with no data. This is Alpha 3. There is a simulation framework built that allows us to test the data chains design, here. Node ageing is also a requirement of this network, and those who would like to know more can read about that here. The above requires disjoint sections in the network, and again that is described here for the avid reader; this particular implementation was a surprisingly mammoth task in itself, but has proven successful.

Then we add the data layer again, to reintroduce user-run vaults (the software people run to create the network and get paid) in Alpha 4. The data element is much more straightforward after data chains part 1 is in place.

After that it’s a lot of testing and the introduction of test safecoin. This is simply a data type on the network, so it is already secured. The addition is the network’s contract; some will call this a “smart contract”, but the difference is that the oracle is in the network and secured by the network. Safecoin is the mechanism for tokenising the provision of resources to the network. The network measures and rewards good behaviour of nodes by allocating safecoin to nodes that behave and have proven to have handled user requests properly.

Those are the last components in the very long journey; not that they will be completed immediately, it’s many (many) months of work and testing at least. We are in a magnificent position right now though, as we do not have any remaining unanswered blockers between us and launch, no huge design issues to face, just some implementation decisions. After launch, the focus in the backend will be simplification and formalisation of as many of these innovations as possible.

Consequences of a true autonomous network

This is where understanding the “Autonomous Network” really matters. The non-ownership of the data storage and computation capability is vital for a future where we can actually integrate innovations and allow “ideas to have sex” much more instantly and effectively. Autonomous networks also allow people and companies to move away from privately owned silos that inevitably get hacked. A good (probably vital for society) outcome is the invalidating of privacy as a product, moving profits to providing great value without taking control from people. This is much more lucrative and less risky, and removes the intrusion and fear that customers increasingly feel for their children these days, never mind the fines for data theft and the costs and upheaval of ransomware etc.

Examples of how some existing technology may change (or be fixed)

  • Autonomous transport
    • Collaborating on and sharing important information such as accidents; road, rail or sea issues; weather and so on. Who owns the servers the data sits on, and what if it is manipulated? Autonomous networks remove the ownership problem and allow true, provable (non-refutable) sharing of data. They also enforce industry-wide rules that are set in code and tested industry-wide. Autonomous networks also remove, or at least vastly reduce, manipulation of results, as well as preventing theft or false reporting of company data and industry test results (I am looking at you, pharma).
  • IoT
    • As many mini compute devices appear, we need to be able to share data between them securely and (again) irrefutably. Importantly, we want the network here to spot and remove bad actors. The early network nodes we have right now are not incentivised to operate for the benefit of us all. Providing these incentives (safecoin) means that the provision of resources to run IoT devices is rewarded, whether or not the "owner" benefits from the device's capability at all times. This alters the market significantly: devices can help others, not only their current owner. With these devices security is important, but ensuring valid data is essential. An autonomous network can, and should, ensure good behaviour of nodes via node managers, as described in the language of the network.
  • Home automation tools and assistants (Alexa, Google Home et al.)
    • The market for, and advances in, home automation, including voice and video recognition, both amaze and terrify consumers. This is an area where we all know we do not wish any company to know all that information about us. With autonomous networks, however, entities create their own accounts without intermediaries (see self-authentication, a very important and simple requirement of an autonomous network, and the sketch after this list), and these accounts can be used to communicate with other nodes, importantly with zero requirement to share who a node belongs to.
  • Cyber warfare or server hacking
    • Simply removing the "target", in terms of servers, significantly reduces the attack surface for cyber warfare, industrial spying and data theft. The market for this level of security is extremely large, but autonomous networks simply remove the problem and are therefore the solution. Everything tried until now has been purely a cat-and-mouse game between black-hat hackers and companies.
  • Password thefts
    • As with improving server security by removing the server, autonomous networks like SAFE remove the requirement to store a password at all, neither locally nor on any disk; we remove passwords from the network entirely. Therefore there is no password theft on the network.
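
Since self-authentication underpins two of the points above (account creation without intermediaries, and removing stored passwords entirely), here is a minimal sketch of the principle. The derivation scheme and names are assumptions for illustration, not the real SAFE client code, and the standard-library hasher stands in for a proper cryptographic hash and key-derivation function.

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

/// Stand-in digest; a real implementation would use a cryptographic
/// hash and KDF, never this.
fn digest(input: &str) -> u64 {
    let mut h = DefaultHasher::new();
    input.hash(&mut h);
    h.finish()
}

/// Where on the network the (encrypted) account packet lives, derived
/// from the user's secrets alone; the network sees only an opaque address.
fn account_location(keyword: &str, pin: &str) -> u64 {
    digest(&format!("{keyword}:{pin}"))
}

/// Key used to decrypt the account packet, derived from the password.
/// It never leaves the user's device, so there is nothing to steal.
fn account_key(password: &str) -> u64 {
    digest(password)
}

fn main() {
    let loc = account_location("my keyword", "1234");
    let key = account_key("correct horse battery staple");
    // A client would fetch the encrypted packet at `loc` and decrypt it
    // locally with `key`: no server, no username database, no stored password.
    println!("fetch packet at {loc:x}, decrypt locally with {key:x}");
}
```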

Of course, some of these can be seen as changing the status quo to the point where the above seems like new technology. It would be new to our engineering eyes, but consumers would see no difference: although they may feel safer, the end products already exist in terms of features, or the improvements would be invisible to them. More importantly, consumers of today's networks probably expect some of the above to already be the case; sadly it is not, and in my opinion will not be, without autonomous networks.

Examples of new technology that will be possible

I have written this section and deleted it, replacing it with more ideas, several times now. I think it is for another post, but readers may wish to read this whole post a few times, along with the linked documents, and come to their own conclusions. No doubt much better ideas than I can provide will be found, and the world-changing products of tomorrow should be discovered this way. The world is full of inspiring people, and with the right tools and the removal of infrastructure costs I believe we can, as a society, make great use of these types of network for a safer future.

I hope this has proven to be a bit of an insight into SAFE and the potential it has. The exciting thing for me is not replacing the Internet, but removing this crazy ownership of data by large corporations. They should be providing value, not taking our privacy. I am also excited by the network's ability to ignore silly laws such as "weaken encryption" or "snoop on your customers". When we use logic to create things of beauty like this, those silly governance issues become nothing to fear, discuss or think about. We all get on with our lives, privately and securely, and we gain the freedom to communicate, collaborate and innovate as one. Then it will get very exciting to watch the progress, which will hopefully benefit every person on the planet in one way or another.

In future posts I will attempt to break down the technical parts in a bit more detail, and will also investigate more solid use cases, one by one.

Posted in complex systems, MaidSafe, Personal Opinion, Privacy

Data chains: what? why? how?

What?


A chain is an image we all know. It represents strength and reliability. Bitcoin has made use of such imagery in the implementation of a blockchain. A blockchain is a chain of links where each link couples transactions (blocks of transactions) in a reliable and cryptographically secured manner.

[Image: chain]

Data chains, instead, are links that couple together data identifiers. A data identifier is not the data itself: just as a blockchain does not hold bitcoins, a data chain does not hold data.
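
As a minimal sketch of that distinction (the field names are my assumptions, not the real DataChain types): an identifier is a small piece of metadata that names and verifies data, never the data itself.

```rust
/// Identifies a piece of network data: what it is and how to verify it.
struct DataIdentifier {
    name: [u8; 32],         // network address of the data
    content_hash: [u8; 32], // hash of the actual bytes
    kind: DataKind,
}

/// Illustrative data kinds; the real network defines its own types.
enum DataKind {
    Immutable,
    Mutable { version: u64 },
}

fn main() {
    let id = DataIdentifier {
        name: [1; 32],
        content_hash: [2; 32],
        kind: DataKind::Mutable { version: 3 },
    };
    // The chain stores identifiers like `id`, never the data they point at.
    println!("identifier is {} bytes of metadata", std::mem::size_of_val(&id));
}
```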

This post gives an overview of data chains. There is also an RFC covering some basic uses of DataChains, and a codebase implementing DataChains, along with data handling (recognition) and long-term storage.

Why?


What use is that then? If these things hold no data then what use are they?

This is a critical question to answer, and one to resist writing off as irrelevant just yet. If we can be assured a transaction happened in a blockchain, then we know where a bitcoin should exist, as if it were a real thing. It is the same with a data chain: we can be sure a piece of data should exist, and where it should exist. With data chains, however, the identified data is real (documents, videos, etc.). That is, if we know these files should exist and can identify them, then we just need to get the data and validate it. With a network that both stores and validates data and their identifiers, we gain a lot in efficiency and simplicity compared to blockchains, which cannot store significant amounts of data such as files. Data chains would additionally allow cross-network/blockchain patterns, but one must ask why do that; duplication is not efficient.

Data handling 

Therefore, if a block identifies indisputable information about a file, such as (naively) a hash of its content, then we can be presented with data and compare it to a valid block. If the block says the data is recognised, then the data was created by whatever mechanism the network agreed on when it assumed responsibility for that data.
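
As a sketch of that recognition step (again with the standard-library hasher standing in for a real cryptographic hash): given raw bytes and a previously agreed block, the check is simply whether the bytes hash to what the block committed to.

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

/// A block's commitment to some data; naively, a hash of its content.
struct Block {
    content_hash: u64,
}

fn hash_bytes(data: &[u8]) -> u64 {
    let mut h = DefaultHasher::new();
    data.hash(&mut h);
    h.finish()
}

/// Data is "recognised" when it matches a valid block's commitment.
fn recognised(block: &Block, data: &[u8]) -> bool {
    hash_bytes(data) == block.content_hash
}

fn main() {
    let data = b"some network data";
    let block = Block { content_hash: hash_bytes(data) };
    assert!(recognised(&block, data));
    assert!(!recognised(&block, b"tampered data"));
    println!("presented data matches its agreed block");
}
```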

Now we can historically validate that data belongs on the network and was paid for, or otherwise accepted; i.e. it is valid network data. This is a great step. Some will think "oh, I can do that with XYZ", but hang on for a few minutes.

Network handling

Here we depart from a standard single-truth chain and look at chain parts, or a decentralised chain on a decentralised network: a chain where only a very small number of nodes knows of the validity of some small set of data, but where the network as a whole knows everything and all data. OK, we can see now that this is a bit different.

We now need to look at the chain a little closer; here is another picture to help. We can see that there are links and blocks, and this is an important distinction to cement in our thinking. In a blockchain, for instance, the link is simple (simple is good; in security, simpler often means stronger).

[Image: datachain_diagram]

Here, though, a link is another agreement block. These links are a collection of the names and signatures of the group that agreed on the chain at that time. Each link must contain at least a majority of the previous link's members. With every change to the group, a new link is created and added to the chain. Please see the consensus overview blog for an overview of these groups and their importance.

Between these links lie the data blocks, which represent data identifiers (hash, name, type, etc.) of real data. Again, these are signed by a majority of the previous link (in actual fact they can be slightly out of order, as required in an out-of-order network, or planet 🙂).
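
A minimal sketch of the link rule just described, with the cryptography elided (set membership stands in for a verified signature): a new link, or a data block between links, is valid only with agreement from a majority of the previous link's members.

```rust
use std::collections::HashSet;

type NodeName = u64;

/// A link: the group members whose signatures formed it.
struct Link {
    members: HashSet<NodeName>,
}

/// Valid only if at least a majority of the previous link's members signed.
fn majority_signed(prev: &Link, signers: &HashSet<NodeName>) -> bool {
    let agreed = prev.members.intersection(signers).count();
    agreed * 2 > prev.members.len()
}

fn main() {
    let prev = Link { members: [1, 2, 3, 4].into_iter().collect() };

    let enough: HashSet<NodeName> = [2, 3, 4].into_iter().collect();
    assert!(majority_signed(&prev, &enough)); // 3 of 4 is a majority

    let too_few: HashSet<NodeName> = [2, 3].into_iter().collect();
    assert!(!majority_signed(&prev, &too_few)); // 2 of 4 is not

    println!("majority rule holds");
}
```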

Now the picture of a DataChain should be building. But how does it split into a decentralised chain?

Splitting up

The easy way here is to consider just the chain picture above, but also to remember back to an XOR network, or plain binary tree; here is one as a reminder.

[Image: binary tree]
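
As a quick sketch of how "closeness" works in such an address space (a toy 8-bit space here; real network names are far larger): the distance between two names is simply their XOR read as a number, and the leading bit decides which half of the binary tree a name sits in.

```rust
type Name = u8; // tiny 8-bit address space, for illustration only

/// XOR distance: smaller means closer in the network's address space.
fn xor_distance(a: Name, b: Name) -> Name {
    a ^ b
}

/// The leading bit: which half of the binary tree a name falls into.
fn branch(n: Name) -> u8 {
    n >> 7
}

fn main() {
    // Names differing only in a low bit are close; a differing
    // leading bit puts a name in the other half of the tree.
    assert!(xor_distance(0b0000_0001, 0b0000_0011) < xor_distance(0b0000_0001, 0b1000_0000));
    println!("0b1010_0000 is in branch {}", branch(0b1010_0000));
}
```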

Remember also that we have a notion of groups of nodes, so let's take a group size of 2 as an example.

The process would be:

  • Node 1 starts the network; there is a link with one member.
  • Node 2 starts and gets the chain so far (it is small, with only node 1 listed). Node 1 then sends a link to node 2, signed by node 1.
  • Node 2 then sends a link to node 1, signed by node 2.

So now the chain has two links: one with node 1 alone, and the next with nodes 1 and 2, each signed by both. This just continues, so no need for me to bore you here.

However, when node 4 joins, and assuming the nodes have a purely even distribution (meaning two nodes' addresses start with 0 and the other two start with 1), the chain splits! Two nodes go down the 0 branch and two go down the 1 branch. The chain has split, but maintains the same "genesis" block: the link with node 1 only. Between the links, data blocks are inserted as data is added (Put), edited (Post) or deleted from the network; each data block, again, must be signed by a majority of the previous link block to be valid.
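
Here is a sketch of that split, using the same toy 8-bit names and a group size of 2 (the names and one-bit split test are illustrative only): once both halves of the address space can satisfy the group size, each half continues its own branch of the chain from the shared genesis link.

```rust
type Name = u8;

/// The leading address bit decides a node's branch after a one-bit split.
fn branch(name: Name) -> u8 {
    name >> 7
}

fn main() {
    // Four evenly spread nodes: two names start with 0, two with 1.
    let nodes: [Name; 4] = [0b0001_0000, 0b0111_0000, 0b1001_0000, 0b1110_0000];
    let group_size = 2;

    let zero_branch: Vec<Name> = nodes.iter().copied().filter(|n| branch(*n) == 0).collect();
    let one_branch: Vec<Name> = nodes.iter().copied().filter(|n| branch(*n) == 1).collect();

    // Both halves can satisfy the group size, so the chain splits;
    // each branch keeps the same genesis link at its root.
    if zero_branch.len() >= group_size && one_branch.len() >= group_size {
        println!("split: {zero_branch:?} down the 0 branch, {one_branch:?} down the 1 branch");
    }
}
```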

As more nodes join, this process continues, with the chain splitting as the network grows (for detail see the RFC and code). This allows some very powerful features, which we will not dive into too deeply, but to name a few as examples:

  • Nodes can be shown to exist on the network (i.e. not fake nodes).
  • Nodes can prove group membership.
  • Nodes can have a provable history on the network.
  • Nodes can restart and republish data securely that the network can agree should exist.
  • Network can react to massive churn and still recover.
  • Network can recover from complete outage.
  • Network can handle Open ledgers for publicly verifiable data.
  • Network can individually remember a transaction (so for instance a single safecoin transaction can be remembered as a receipt, at a cost though).
  • Network can handle versioning of any data type (each version is paid for, though).
  • Fake nodes cannot forge themselves onto a chain without actually taking over a group (meaning they cannot be fake nodes 🙂 ).
  • As data blocks are held between verifiable links, data validity is guaranteed.
  • As data is guaranteed, some nodes can hold only identifiers and not the data, or only a small subset of the data.
  • As not all nodes are required to store data, churn events will produce significantly less traffic as not all nodes need all data all the time.
  • The network can handle nodes of varying capabilities and use the strongest node in a group to handle the action it is strongest with (i.e. nodes with large storage, store lots, nodes with high bandwidth transfer a lot etc.).
  • Archive nodes become a simple state of a capable node that will be rewarded well in safecoin, but which has to fight for its place in the group; too many reboots or missed calls and others will take its place.
  • Nodes can be measured and ranked easily by the network (an important security capability).

There is a lot more this pattern allows, an awful lot more, such as massive growth or an immediate contraction in the number of nodes. It is doubtful that the full capability of such a system can be quantified easily, and certainly not in a single blog post like this, but now it is time to imagine what we can do.

How? 

Moving forward we need many things to happen. These include, but are not limited to:

  1. The detailed design must be complete for phase 1 (data security and persistence).
  2. Open debates, presentations and discussions must take place.
  3. Code must be written.
  4. Code must be tested.
  5. The code must be integrated into the existing system.
  6. End-to-end testing must be carried out.
  7. Move to public use.

Of these, points 1, 2, 3 and 4 are ongoing right now. Point 5 requires changes to the existing SAFE routing table, starting with an RFC. Point 4 will be enhanced as point 2 gets more attention and input. Point 6 is covered by community testing, and point 7 by the wider community tests (i.e. the Alpha stages).

Conclusion

Data chains appear to be a natural progression for decentralised systems. They allow data of any type, size or format to be looked after and maintained in a secure and decentralised manner: not only the physical data itself (very important), but also the validity of that data on the network.

Working on the data chains component has been pretty tough, with all the other commitments a typical bleeding-edge project has to face, but it looks to have been very much worth the effort and determination. The pattern is so simple (remember, simple is good) that when I tried to tweak it for different problems I thought might happen, the chain fixed itself. So it is a very simple bit of code really, but it provides an incredible amount of security to a system, as well as a historical view of the network, republish capability and more, all with ease. The single biggest mistake in this library/design would be to try any specialisation for any special situation; almost every time, it takes care of itself, at least so far.

This needs a large-scale simulation, which we will certainly do in MaidSafe prior to user testing, but it seems to provide a significant missing link for many of today's networks and decentralised efforts. I hope others find this as useful, intriguing and interesting as I do.

We think there will be many posts depicting various aspects of this design pattern over the next while, as we gain the ability to explain what it offers in many areas (i.e. when we understand it well enough to explain it easily).


Posted in Uncategorized

Introduction & Technical Overview of SAFE Consensus

The features included in decentralized networks can be quite varied based on the proposed goals of the technology. From the sharing capabilities offered by Bittorrent to user privacy enabled via To…

Source: Introduction & Technical Overview of SAFE Consensus

Posted in Uncategorized
