Akio Nishimura
 Tokyo, Japan 
Posted to BlaBlaNet on Fri, 08 Jul 2016 17:45:36 +0900
Hubzilla Data Redundancy: A Possible Solution
I was thinking today about how to make Hubzilla faster and easier to run. One of the issues with Hubzilla is delivery fan-out: if one of your users has 1000 friends on 1000 different hubs, the moment that user makes a public post your server has to contact all 1000 hubs to deliver the public message. You might be able to afford that, but if you have 200 users posting at the same time the number of connections becomes 200 × 1000 = 200,000. That causes problems on small hubs, such as high resource usage and Apache and MySQL crashing.
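
Just to make the numbers concrete, here is a small Python sketch (not Hubzilla code, only the arithmetic from the example above, assuming the three-hub replication I propose below):

def direct_deliveries(users: int, friends_per_user: int) -> int:
    # Today: every posting user's hub contacts every friend's hub directly.
    return users * friends_per_user

def dht_deliveries(users: int, replicas: int = 3) -> int:
    # Proposed: every posting user's hub only contacts the replica hubs.
    return users * replicas

print(direct_deliveries(200, 1000))  # 200000 connections
print(dht_deliveries(200))           # 600 connections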

Possible Solution:
To avoid the need to contact every hub, and to create redundancy, the data gets stored in the network in a distributed hash table (DHT). Every hub has an ID that is also a hash value; that part is basically already in place in Hubzilla. Your hub goes through its database and checks whether each piece of data belongs with its own ID by calculating the distance between its own ID and the hash values in the database. The data is then sent to the three hubs whose IDs fit best. Since all the hubs know the other hubs' IDs and all calculate the distance the same way, peers know where to expect the results for a given query and don't have to contact every hub in the network. Your data connections now become 200 × 3 = 600, and your post still gets distributed to everyone in the network.
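
To make the distance idea more concrete, here is a minimal Python sketch (my own illustration, not actual Hubzilla code; the hub names, SHA-256 hashing, and XOR distance are just assumptions about how it could be done) of how every hub could independently pick the same three hubs for a given message:

import hashlib

def id_hash(value: str) -> int:
    # Hypothetical helper: map a hub address or message ID to an integer hash.
    return int(hashlib.sha256(value.encode("utf-8")).hexdigest(), 16)

def xor_distance(a: int, b: int) -> int:
    # Kademlia-style distance between two IDs: smaller XOR means "closer".
    return a ^ b

def closest_hubs(message_id: str, hubs: list[str], replicas: int = 3) -> list[str]:
    # Pick the `replicas` hubs whose IDs are closest to the message hash.
    target = id_hash(message_id)
    return sorted(hubs, key=lambda h: xor_distance(id_hash(h), target))[:replicas]

hubs = ["hub-a.example", "hub-b.example", "hub-c.example", "hub-d.example"]
print(closest_hubs("public-post-12345", hubs))

Because every hub runs the same deterministic calculation, no coordination is needed: the sender knows which three hubs to contact, and everyone else knows which three hubs to ask for that message.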

Basically, every hub helps distribute your message and shares the load. That could be a solution for public messages.

I would like to know the opinion of anyone here.