User avatar
BitLee
Nickel Bitcoiner
Nickel Bitcoiner
Posts: 28
Joined: Fri Jul 01, 2016 11:27 am

Bitcoin based Blockchain compression algorithm

Fri Jul 01, 2016 11:33 am

Hello,
I am the developer of Vpncoin, nice to meet you.
We have developed a blockchain compression algorithm.
It reduces disk space usage by about 25% and also reduces network traffic.
We are happy to share it with Bitcoin, free of charge,
and the source code changes are compatible: they will not fork Bitcoin.

If you are interested in this, please post here,
and I will publish the relevant source code.
Thanks.

By the way,
the core compression algorithms are LZMA (7-Zip) and LZ4.
Our compression code has been deployed in Vpncoin and runs stably.
If anyone wants to use this code,
please credit the authors (Vpncoin development team, Bit Lee).
Last edited by BitLee on Sat Jul 02, 2016 3:27 am, edited 5 times in total.

User avatar
rogerver
Founder
Founder
Posts: 1850
Joined: Thu Sep 10, 2015 6:55 am

Donate BTC of your choice to 1PpmSbUghyhgbzsDevqv1cxxx8cB2kZCdP

Contact: Website Twitter

Re: Blockchain compression algorithm

Fri Jul 01, 2016 11:44 am

Please publish the relevant source code.
Help spread Bitcoin by linking to everything mentioned here:
topic7039.html

User avatar
BitLee
Nickel Bitcoiner
Nickel Bitcoiner
Posts: 28
Joined: Fri Jul 01, 2016 11:27 am

Re: Blockchain compression algorithm

Fri Jul 01, 2016 1:53 pm

I am sorry for the delay; something urgent came up.
To cut a long story short, here is the source code directly.

Add code to init.cpp

Code: Select all

int dw_zip_block = 0;
int dw_zip_limit_size = 0;
int dw_zip_txdb = 0;

bool AppInit2()
{
    ...
    // ********************************************************* Step 2: parameter interactions
#ifdef WIN32
    dw_zip_block = GetArg("-zipblock", 1);
#else
    /* The LZMA code on Linux still needs work:
       it runs, but sometimes crashes. */
    dw_zip_block = GetArg("-zipblock", 0);
#endif
    dw_zip_limit_size = GetArg("-ziplimitsize", 64);
    dw_zip_txdb = GetArg("-ziptxdb", 0);
    if( dw_zip_block > 1 ){ dw_zip_block = 1; }
    else if( dw_zip_block == 0 ){ dw_zip_txdb = 0; }
    ...
}
Add code to main.h

Code: Select all

extern int bitnet_pack_block(CBlock* block, string& sRzt);
extern bool getCBlockByFilePos(CAutoFile filein, unsigned int nBlockPos, CBlock* block);
extern bool getCBlocksTxByFilePos(CAutoFile filein, unsigned int nBlockPos, unsigned int txId, CTransaction& tx);
extern int dw_zip_block;

class CTransaction
{
    ...
    bool ReadFromDisk(CDiskTxPos pos, FILE** pfileRet=NULL)
    {
        CAutoFile filein = CAutoFile(OpenBlockFile(pos.nFile, 0, pfileRet ? "rb+" : "rb"), SER_DISK, CLIENT_VERSION);
        if (!filein)
            return error("CTransaction::ReadFromDisk() : OpenBlockFile failed");

        if( dw_zip_block > 0 )
        {
            //if( fDebug ) printf("CTransaction::ReadFromDisk():: pos.nFile [%d], nBlockPos [%d], nTxPos [%d] \n", pos.nFile, pos.nBlockPos, pos.nTxPos);
            getCBlocksTxByFilePos(filein, pos.nBlockPos, pos.nTxPos, *this);
        }else{
            // Read transaction
            if (fseek(filein, pos.nTxPos, SEEK_SET) != 0)
                return error("CTransaction::ReadFromDisk() : fseek failed");
            try {
                filein >> *this;
            }
            catch (std::exception &e) {
                return error("%s() : deserialize or I/O error", __PRETTY_FUNCTION__);
            }
        }

        // Return file pointer
        if (pfileRet)
        {
            if (fseek(filein, pos.nTxPos, SEEK_SET) != 0)
                return error("CTransaction::ReadFromDisk() : second fseek failed");
            *pfileRet = filein.release();
        }
        return true;
    }
    ...
};

class CBlock
{
    ...
    bool WriteToDisk(unsigned int& nFileRet, unsigned int& nBlockPosRet, bool bForceWrite = false)
    {
        // Open history file to append
        CAutoFile fileout = CAutoFile(AppendBlockFile(nFileRet), SER_DISK, CLIENT_VERSION);
        if (!fileout)
            return error("CBlock::WriteToDisk() : AppendBlockFile failed");

        // Write index header
        unsigned int nSize = fileout.GetSerializeSize(*this);
        int nSize2 = nSize;
        string sRzt;
        if( dw_zip_block > 0 )
        {
            // compression block +++
            nSize = bitnet_pack_block(this, sRzt);   // nSize includes the 4-byte (real block size) prefix
            // compression block +++
        }
        fileout << FLATDATA(pchMessageStart) << nSize;

        // Write block
        long fileOutPos = ftell(fileout);
        if (fileOutPos < 0)
            return error("CBlock::WriteToDisk() : ftell failed");
        nBlockPosRet = fileOutPos;
        if( dw_zip_block == 0 ){
            fileout << *this;
        }
        else{
            //if( fDebug ) printf("main.h Block.WriteToDisk:: nFileRet [%d], nBlockSize [%d], zipBlockSize [%d], nBlockPosRet = [%d] \n", nFileRet, nSize2, nSize, nBlockPosRet);
            // compression block +++
            if( nSize > 0 ){ fileout.write(sRzt.c_str(), nSize); }
            sRzt.resize(0);
            // compression block +++
        }

        // Flush stdio buffers and commit to disk before returning
        fflush(fileout);
        if( bForceWrite || (!IsInitialBlockDownload() || (nBestHeight+1) % 500 == 0) )
            FileCommit(fileout);
        return true;
    }

    bool ReadFromDisk(unsigned int nFile, unsigned int nBlockPos, bool fReadTransactions=true)
    {
        SetNull();

        unsigned int iPos = nBlockPos;
        if( dw_zip_block > 0 ){ iPos = 0; }

        // Open history file to read
        CAutoFile filein = CAutoFile(OpenBlockFile(nFile, iPos, "rb"), SER_DISK, CLIENT_VERSION);
        if (!filein)
            return error("CBlock::ReadFromDisk() : OpenBlockFile failed");
        if (!fReadTransactions)
            filein.nType |= SER_BLOCKHEADERONLY;

        // Read block
        try {
            if( dw_zip_block > 0 )
            {
                getCBlockByFilePos(filein, nBlockPos, this);
            }else{
                filein >> *this;
            }
        }
        catch (std::exception &e) {
            return error("%s() : deserialize or I/O error", __PRETTY_FUNCTION__);
        }

        // Check the header
        if (fReadTransactions && IsProofOfWork() && !CheckProofOfWork(GetPoWHash(), nBits))
            return error("CBlock::ReadFromDisk() : errors in block header");
        return true;
    }
    ...
};
Last edited by BitLee on Mon Jul 04, 2016 1:13 am, edited 7 times in total.

User avatar
BitLee
Nickel Bitcoiner
Nickel Bitcoiner
Posts: 28
Joined: Fri Jul 01, 2016 11:27 am

Re: Blockchain compression algorithm

Fri Jul 01, 2016 2:11 pm

Some related functions:

Code: Select all

#include "lz4/lz4.h"
#include "lzma/LzmaLib.h"

int StreamToBuffer(CDataStream &ds, string& sRzt, int iSaveBufSize)
{
    int bsz = ds.size();
    int iRsz = bsz;
    if( iSaveBufSize > 0 ){ iRsz = iRsz + 4; }
    sRzt.resize(iRsz);
    char* ppp = (char*)sRzt.c_str();
    if( iSaveBufSize > 0 ){ ppp = ppp + 4; }
    ds.read(ppp, bsz);
    if( iSaveBufSize > 0 ){ *(unsigned int *)(ppp - 4) = bsz; }
    return iRsz;
}

int CBlockToBuffer(CBlock *pb, string& sRzt)
{
    CDataStream ssBlock(SER_DISK, CLIENT_VERSION);
    ssBlock << (*pb);
    int bsz = StreamToBuffer(ssBlock, sRzt, 0);
    return bsz;
}

int writeBufToFile(char* pBuf, int bufLen, string fName)
{
    int rzt = 0;
    std::ofstream oFs(fName.c_str(), std::ios::out | std::ofstream::binary);
    if( oFs.is_open() )
    {
        if( pBuf ) oFs.write(pBuf, bufLen);
        oFs.close();
        rzt++;
    }
    return rzt;
}

int lz4_pack_buf(char* pBuf, int bufLen, string& sRzt)
{
    int worstCase = 0;
    int lenComp = 0;
    try{
        worstCase = LZ4_compressBound( bufLen );
        sRzt.resize(worstCase + 4);
        char* pp = (char *)sRzt.c_str();
        lenComp = LZ4_compress(pBuf, pp + 4, bufLen);
        if( lenComp > 0 ){
            *(unsigned int *)pp = bufLen;   // 4-byte original-size prefix
            lenComp = lenComp + 4;
        }
    }
    catch (std::exception &e) {
        printf("lz4_pack_buf err [%s]:: buf len %d, worstCase[%d], lenComp[%d] \n", e.what(), bufLen, worstCase, lenComp);
    }
    return lenComp;
}

int lz4_unpack_buf(const char* pZipBuf, unsigned int zipLen, string& sRzt)
{
    int rzt = 0;
    unsigned int realSz = *(unsigned int *)pZipBuf;
    if( fDebug ) printf("lz4_unpack_buf:: zipLen [%d], realSz [%d], \n", zipLen, realSz);
    sRzt.resize(realSz);
    char* pOutData = (char*)sRzt.c_str();

    // -- decompress
    rzt = LZ4_decompress_safe(pZipBuf + 4, pOutData, zipLen, realSz);
    if ( rzt != (int) realSz)
    {
        if( fDebug ) printf("lz4_unpack_buf:: Could not decompress message data. [%d :: %d] \n", rzt, realSz);
        sRzt.resize(0);
    }
    return rzt;
}

int CBlockFromBuffer(CBlock* block, char* pBuf, int bufLen)
{
    CDataStream ssBlock(SER_DISK, CLIENT_VERSION);
    ssBlock.write(pBuf, bufLen);
    int i = ssBlock.size();
    ssBlock >> (*block);
    return i;
}

int lz4_pack_block(CBlock* block, string& sRzt)
{
    int rzt = 0;
    string sbf;
    int bsz = CBlockToBuffer(block, sbf);
    if( bsz > 12 )
    {
        char* pBuf = (char*)sbf.c_str();
        rzt = lz4_pack_buf(pBuf, bsz, sRzt);
    }
    sbf.resize(0);
    return rzt;
}

int lzma_depack_buf(unsigned char* pLzmaBuf, int bufLen, string& sRzt)
{
    int rzt = 0;
    unsigned int dstLen = *(unsigned int *)pLzmaBuf;
    sRzt.resize(dstLen);
    unsigned char* pOutBuf = (unsigned char*)sRzt.c_str();
    unsigned srcLen = bufLen - LZMA_PROPS_SIZE - 4;
    SRes res = LzmaUncompress(pOutBuf, &dstLen, &pLzmaBuf[LZMA_PROPS_SIZE + 4], &srcLen,
                              &pLzmaBuf[4], LZMA_PROPS_SIZE);
    if( res == SZ_OK )
    {
        rzt = dstLen;
    }
    else
        sRzt.resize(0);
    if( fDebug ) printf("lzma_depack_buf:: res [%d], dstLen[%d], rzt = [%d]\n", res, dstLen, rzt);
    return rzt;
}

int lzma_pack_buf(unsigned char* pBuf, int bufLen, string& sRzt, int iLevel, unsigned int iDictSize)  // (1 << 17) = 131072 = 128K
{
    int res = 0;
    int rzt = 0;
    unsigned propsSize = LZMA_PROPS_SIZE;
    unsigned destLen = bufLen + (bufLen / 3) + 128;
    try{
        sRzt.resize(propsSize + destLen + 4);
        unsigned char* pOutBuf = (unsigned char*)sRzt.c_str();
        res = LzmaCompress(&pOutBuf[LZMA_PROPS_SIZE + 4], &destLen, pBuf, bufLen, &pOutBuf[4], &propsSize,
                           iLevel, iDictSize, -1, -1, -1, -1, -1);  // 1 << 14 = 16K, 1 << 16 = 64K
        if( (res == SZ_OK) && (propsSize == LZMA_PROPS_SIZE) )
        {
            *(unsigned int *)pOutBuf = bufLen;   // 4-byte original-size prefix
            rzt = propsSize + destLen + 4;
        }
        else
            sRzt.resize(0);
    }
    catch (std::exception &e) {
        printf("lzma_pack_buf err [%s]:: buf len %d, rzt[%d] \n", e.what(), bufLen, rzt);
    }
    if( fDebug ) printf("lzma_pack_buf:: res [%d], propsSize[%d], destLen[%d], rzt = [%d]\n", res, propsSize, destLen, rzt);
    return rzt;
}

int lzma_pack_block(CBlock* block, string& sRzt, int iLevel, unsigned int iDictSize)
{
    int rzt = 0;
    string sbf;
    int bsz = CBlockToBuffer(block, sbf);
    if( bsz > 12 )
    {
        unsigned char* pBuf = (unsigned char*)sbf.c_str();
        rzt = lzma_pack_buf(pBuf, bsz, sRzt, iLevel, iDictSize);
    }
    sbf.resize(0);
    return rzt;
}

int bitnet_pack_block(CBlock* block, string& sRzt)
{
    if( dw_zip_block == 1 )
        return lzma_pack_block(block, sRzt, 9, uint_256KB);
    else if( dw_zip_block == 2 )
        return lz4_pack_block(block, sRzt);
    return 0;   // no compression configured
}

bool getCBlockByFilePos(CAutoFile filein, unsigned int nBlockPos, CBlock* block)
{
    bool rzt = false;
    int ips = nBlockPos - 4;   // position of the zipped block size
    if (fseek(filein, ips, SEEK_SET) != 0)
        return error("getCBlockByFilePos:: fseek failed");
    filein >> ips;             // read the zipped block size
    if( fDebug ) printf("getCBlockByFilePos:: zipped block size [%d] \n", ips);

    string s;
    s.resize(ips);
    char* pZipBuf = (char *)s.c_str();
    filein.read(pZipBuf, ips);

    string sUnpak;
    int iRealSz = 0;
    if( dw_zip_block == 1 )
        iRealSz = lzma_depack_buf((unsigned char*)pZipBuf, ips, sUnpak);
    else if( dw_zip_block == 2 )
        iRealSz = lz4_unpack_buf(pZipBuf, ips - 4, sUnpak);
    if( fDebug ) printf("getCBlockByFilePos:: zipped block size [%d], iRealSz [%d] \n", ips, iRealSz);
    if( iRealSz > 0 )
    {
        pZipBuf = (char *)sUnpak.c_str();
        rzt = CBlockFromBuffer(block, pZipBuf, iRealSz) > 12;
    }
    s.resize(0);
    sUnpak.resize(0);
    return rzt;
}

bool getCBlocksTxByFilePos(CAutoFile filein, unsigned int nBlockPos, unsigned int txId, CTransaction& tx)
{
    bool rzt = false;
    CBlock block;
    rzt = getCBlockByFilePos(filein, nBlockPos, &block);
    if( rzt )
    {
        if( block.vtx.size() > txId )
        {
            tx = block.vtx[txId];
            if( fDebug ){ printf("\n\n getCBlocksTxByFilePos:: tx info: \n"); tx.print(); }
        }
        else
            rzt = false;
    }
    return rzt;
}
Last edited by BitLee on Sat Jul 02, 2016 3:24 am, edited 2 times in total.

User avatar
BitLee
Nickel Bitcoiner
Nickel Bitcoiner
Posts: 28
Joined: Fri Jul 01, 2016 11:27 am

Re: Blockchain compression algorithm

Fri Jul 01, 2016 2:16 pm

Above is the key code, but not all of it.
Doesn't this forum support C++ source code? The display is not very readable :)

User avatar
Fremont
Site Admin
Site Admin
Posts: 475
Joined: Tue Nov 17, 2015 11:52 am

Donate BTC of your choice to 18tQ7D9RufgEZ9dSGLytm4Sgk8g2M5NzNZ

Contact: Website

Re: Blockchain compression algorithm

Fri Jul 01, 2016 2:47 pm

Above is the key code, but not all of it.
Doesn't this forum support C++ source code? The display is not very readable :)
Hi BitLee,

I'm not sure if this will help, but you can place code in code tags, like this: [example]CODE HERE[/example]


User avatar
BitLee
Nickel Bitcoiner
Nickel Bitcoiner
Posts: 28
Joined: Fri Jul 01, 2016 11:27 am

Re: Blockchain compression algorithm

Fri Jul 01, 2016 3:09 pm

Hi BitLee,

I'm not sure if this will help, but you can place code in code tags, like this: [example]CODE HERE[/example]

Thanks :)
Last edited by BitLee on Fri Jul 01, 2016 3:30 pm, edited 1 time in total.

User avatar
Fremont
Site Admin
Site Admin
Posts: 475
Joined: Tue Nov 17, 2015 11:52 am

Donate BTC of your choice to 18tQ7D9RufgEZ9dSGLytm4Sgk8g2M5NzNZ

Contact: Website

Re: Blockchain compression algorithm

Fri Jul 01, 2016 3:24 pm

Thanks :)
You're very welcome! :)

It should look like this:

Code: Select all

[code]Input code here
[/code]

User avatar
BitLee
Nickel Bitcoiner
Nickel Bitcoiner
Posts: 28
Joined: Fri Jul 01, 2016 11:27 am

Re: Blockchain compression algorithm

Fri Jul 01, 2016 3:34 pm

You're very welcome! :)

It should look like this:

Code: Select all

[code]Input code here
[/code]
I see, thanks again.

User avatar
TomZ
Nickel Bitcoiner
Nickel Bitcoiner
Posts: 111
Joined: Thu Oct 29, 2015 5:28 pm
Contact: Website Twitter

Re: Bitcoin based Blockchain compression algorithm

Sat Jul 02, 2016 8:08 pm

Can you give some numbers on the compression you got?

Original number of bytes for a block and compressed number of bytes for a block. And that for 10 blocks or so.

User avatar
BitLee
Nickel Bitcoiner
Nickel Bitcoiner
Posts: 28
Joined: Fri Jul 01, 2016 11:27 am

Re: Bitcoin based Blockchain compression algorithm

Sun Jul 03, 2016 12:55 am

Can you give some numbers on the compression you got?

Original number of bytes for a block and compressed number of bytes for a block. And that for 10 blocks or so.
The compression algorithm is already used in Vpncoin with LZMA (7-Zip).
On Windows it saves 20%~25% or more of disk space and network traffic.
As you know, the more data there is, the better it compresses,
and the bigger the block, the higher the compression ratio.
I think the compression ratio on Bitcoin will be even higher,
because Bitcoin's blocks are large.

User avatar
TomZ
Nickel Bitcoiner
Nickel Bitcoiner
Posts: 111
Joined: Thu Oct 29, 2015 5:28 pm
Contact: Website Twitter

Re: Bitcoin based Blockchain compression algorithm

Sun Jul 03, 2016 12:32 pm

Can you give some numbers on the compression you got?

Original number of bytes for a block and compressed number of bytes for a block. And that for 10 blocks or so.
The compression algorithm is already used in Vpncoin with LZMA (7-Zip).
On Windows it saves 20%~25% or more of disk space and network traffic.
As you know, the more data there is, the better it compresses,
and the bigger the block, the higher the compression ratio.
I think the compression ratio on Bitcoin will be even higher,
because Bitcoin's blocks are large.
Where does that 20%-25% come from? Can you post some actual results you gained on blocks?

I doubt you reached 20% compression using a standard compression algorithm. Please show us your numbers.

User avatar
BitLee
Nickel Bitcoiner
Nickel Bitcoiner
Posts: 28
Joined: Fri Jul 01, 2016 11:27 am

Re: Bitcoin based Blockchain compression algorithm

Sun Jul 03, 2016 2:31 pm

Can you give some numbers on the compression you got?

Original number of bytes for a block and compressed number of bytes for a block. And that for 10 blocks or so.
The compression algorithm is already used in Vpncoin with LZMA (7-Zip).
On Windows it saves 20%~25% or more of disk space and network traffic.
As you know, the more data there is, the better it compresses,
and the bigger the block, the higher the compression ratio.
I think the compression ratio on Bitcoin will be even higher,
because Bitcoin's blocks are large.
Where does that 20%-25% come from? Can you post some actual results you gained on blocks?
I doubt you reached 20% compression using a standard compression algorithm. Please show us your numbers.
My test data comes from Vpncoin; this code runs stably there.
In Vpncoin we use the LZMA (7-Zip) algorithm at maximum compression,
and the algorithm not only saves disk space, it saves the same proportion of network traffic.
If you don't believe it, you can test it yourself. :D

User avatar
TomZ
Nickel Bitcoiner
Nickel Bitcoiner
Posts: 111
Joined: Thu Oct 29, 2015 5:28 pm
Contact: Website Twitter

Re: Bitcoin based Blockchain compression algorithm

Sun Jul 03, 2016 10:27 pm


Where does that 20%-25% come from? Can you post some actual results you gained on blocks?
I doubt you reached 20% compression using a standard compression algorithm. Please show us your numbers.
My test data comes from Vpncoin; this code runs stably there.
In Vpncoin we use the LZMA (7-Zip) algorithm at maximum compression,
and the algorithm not only saves disk space, it saves the same proportion of network traffic.
If you don't believe it, you can test it yourself. :D

We tried using compression algorithms on Bitcoin. It turns out that most of the data is things like addresses, which are as close to random as you can get. That means they don't compress, because there are no patterns in them.

A 20%-25% compression ratio is exceptional, and exceptional claims need exceptional evidence.

Maybe your test setup reused addresses a lot; I don't know.

User avatar
BitLee
Nickel Bitcoiner
Nickel Bitcoiner
Posts: 28
Joined: Fri Jul 01, 2016 11:27 am

Re: Bitcoin based Blockchain compression algorithm

Mon Jul 04, 2016 12:28 am


Where does that 20%-25% come from? Can you post some actual results you gained on blocks?
I doubt you reached 20% compression using a standard compression algorithm. Please show us your numbers.
My test data come from Vpncoin, these code run stable in the Vpncoin.
In the Vpncoin, we use the LZMA (7Zip) algorithm, maximum compression ratio,
And this algorithm is not only saves disk space, It can also save the same network traffic.
If you don't believe it, you can test it by yourself. :D

We tried using compression algorithms on Bitcoin. It turns out that most of the data is things like addresses, which are as close to random as you can get. That means they don't compress, because there are no patterns in them.
A 20%-25% compression ratio is exceptional, and exceptional claims need exceptional evidence.
Maybe your test setup reused addresses a lot; I don't know.

Have you read my source code?

You can test it like this:
pick a few blocks and export each one as a single file (for example block001.dat, block002.dat, block003.dat...),
then use the 7-Zip application to compress each of these files (in maximum compression mode).
That way you will see the compression effect.

The best way, though, is to run this code in Bitcoin and test it there.
Last edited by BitLee on Mon Jul 04, 2016 1:00 am, edited 2 times in total.

User avatar
BitLee
Nickel Bitcoiner
Nickel Bitcoiner
Posts: 28
Joined: Fri Jul 01, 2016 11:27 am

Re: Bitcoin based Blockchain compression algorithm

Mon Jul 04, 2016 12:46 am

If the Bitcoin development team needs it,
I will provide the complete source code of the compression algorithm integrated into Bitcoin version 0.8.6,
so that everyone can compile and test it.

User avatar
BitLee
Nickel Bitcoiner
Nickel Bitcoiner
Posts: 28
Joined: Fri Jul 01, 2016 11:27 am

Re: Bitcoin based Blockchain compression algorithm

Tue Jul 05, 2016 6:53 pm

Today I ported the compression algorithm code to Bitcoin version 0.8.6,
and the compression effect is obvious.

To make the compression effect easy to observe, I changed FindBlockPos in main.cpp
so that each block file (blkxxxxx.dat) contains 10,000 blocks.

Code: Select all

bool FindBlockPos(CValidationState &state, CDiskBlockPos &pos, unsigned int nAddSize, unsigned int nHeight, uint64 nTime, bool fKnown = false)
{
    ...
    /* while (infoLastBlockFile.nSize + nAddSize >= MAX_BLOCKFILE_SIZE) { */
    if( ((nHeight / 10000) > 0) && ((nHeight % 10000) == 0) )
    {
        printf("nHeight = [%d], Leaving block file %i: %s\n", nHeight, nLastBlockFile, infoLastBlockFile.ToString().c_str());
        FlushBlockFile(true);
        nLastBlockFile++;
        infoLastBlockFile.SetNull();
        pblocktree->ReadBlockFileInfo(nLastBlockFile, infoLastBlockFile); // check whether data for the new file somehow already exist; can fail just fine
        fUpdatedLast = true;
    }
    ...
}
Last edited by BitLee on Wed Jul 06, 2016 1:22 pm, edited 1 time in total.

User avatar
BitLee
Nickel Bitcoiner
Nickel Bitcoiner
Posts: 28
Joined: Fri Jul 01, 2016 11:27 am

Re: Bitcoin based Blockchain compression algorithm

Wed Jul 06, 2016 4:14 am

Original Bitcoin genesis block, hex dump: [screenshot]

Compressed Bitcoin genesis block, hex dump: [screenshot]

User avatar
BitLee
Nickel Bitcoiner
Nickel Bitcoiner
Posts: 28
Joined: Fri Jul 01, 2016 11:27 am

Re: Bitcoin based Blockchain compression algorithm

Wed Jul 06, 2016 1:21 pm

@rogerver @TomZ
Here is the test data:

blk00000.dat (blocks 0 ~ 9999): original 2,318,345 bytes, compressed 2,116,328 bytes, 8.7% saved
blk00001.dat (blocks 10000 ~ 19999): original 2,303,141 bytes, compressed 2,103,239 bytes, 8.6% saved
blk00002.dat (blocks 20000 ~ 29999): original 2,440,262 bytes, compressed 2,224,608 bytes, 8.8% saved
blk00003.dat (blocks 30000 ~ 39999): original 2,500,372 bytes, compressed 2,278,627 bytes, 8.86% saved
blk00004.dat (blocks 40000 ~ 49999): original 2,775,946 bytes, compressed 2,527,266 bytes, 8.95% saved
blk00005.dat (blocks 50000 ~ 59999): original 4,611,316 bytes, compressed 3,927,464 bytes, 14.8% saved
blk00006.dat (blocks 60000 ~ 69999): original 6,788,315 bytes, compressed 5,763,507 bytes, 15% saved
blk00007.dat (blocks 70000 ~ 79999): original 8,111,206 bytes, compressed 6,493,703 bytes, 19.9% saved
blk00008.dat (blocks 80000 ~ 89999): original 7,963,189 bytes, compressed 7,048,131 bytes, 11.49% saved
blk00009.dat (blocks 90000 ~ 99999): original 20,742,813 bytes, compressed 13,708,206 bytes, 33.9% saved
blk00010.dat (blocks 100000 ~ 109999): original 23,122,509 bytes, compressed 19,481,570 bytes, 15.7% saved
blk00011.dat (blocks 110000 ~ 119999): original 50,681,392 bytes, compressed 40,918,962 bytes, 19.2% saved
blk00012.dat (blocks 120000 ~ 129999): original 107,469,564 bytes, compressed 88,319,322 bytes, 17.8% saved
blk00013.dat (blocks 130000 ~ 139999): original 231,631,119 bytes, compressed 188,562,481 bytes, 18.59% saved
blk00014.dat (blocks 140000 ~ 149999): original 215,720,950 bytes, compressed 174,676,348 bytes, 19% saved
blk00015.dat (blocks 150000 ~ 159999): original 173,452,632 bytes, compressed 139,074,101 bytes, 19.8% saved
blk00016.dat (blocks 160000 ~ 169999): original 212,377,235 bytes, compressed 164,287,461 bytes, 22.6% saved
blk00017.dat (blocks 170000 ~ 179999): original 263,652,393 bytes, compressed 205,578,322 bytes, 22% saved
blk00018.dat (blocks 180000 ~ 189999): original 887,112,287 bytes, compressed 612,296,114 bytes, 30.9% saved
blk00019.dat (blocks 190000 ~ 199999): original 925,036,513 bytes, compressed 638,670,092 bytes, 30.9% saved

User avatar
BitLee
Nickel Bitcoiner
Nickel Bitcoiner
Posts: 28
Joined: Fri Jul 01, 2016 11:27 am

Re: Bitcoin based Blockchain compression algorithm

Fri Jul 08, 2016 11:09 am

This forum is so quiet? :ugeek: :geek:

User avatar
rogerver
Founder
Founder
Posts: 1850
Joined: Thu Sep 10, 2015 6:55 am

Donate BTC of your choice to 1PpmSbUghyhgbzsDevqv1cxxx8cB2kZCdP

Contact: Website Twitter

Re: Bitcoin based Blockchain compression algorithm

Mon Jul 11, 2016 6:04 pm

This forum is so quiet? :ugeek: :geek:
Please invite your friends!
Help spread Bitcoin by linking to everything mentioned here:
topic7039.html

User avatar
BitLee
Nickel Bitcoiner
Nickel Bitcoiner
Posts: 28
Joined: Fri Jul 01, 2016 11:27 am

Re: Bitcoin based Blockchain compression algorithm

Fri Jul 22, 2016 5:54 pm

The Bitcoin 0.8.6 source code with the blockchain compression algorithm integrated has been uploaded:
https://github.com/Bit-Net/Bitcoin-0.8.6

elnaznazari
Posts: 1
Joined: Wed Jan 30, 2019 6:22 am

Re: Bitcoin based Blockchain compression algorithm

Wed Jan 30, 2019 6:25 am

This is Elnaz Nazari. I am new to blockchain, and English is my second language; I would appreciate it if you could recommend a couple of good references that would help me better understand the topic. I have already read some papers, but a couple of questions keep my mind busy:
Can blockchain be used with JPEG images to improve security without affecting compression; is this true?
Can a blockchain algorithm be used for image compression? If so, are there any key references or articles about it to start from?
And in general, were there any objectives in using blockchain other than security? If so, what are they?
For your information, I am studying for a master's degree in biomedical engineering and I am also interested in blockchain technology; I would like to use it somehow in medical systems.

Thanks in advance for your help and time.

Return to “Development & Technical Discussion”

Who is online

Users browsing this forum: No registered users and 1 guest