Friday, July 27, 2018

What is a Hash Function?

A cryptographic hash function is a hash function which takes an input (or ‘message’) and returns a fixed-size alphanumeric string. The string is called the ‘hash value’, ‘message digest’, ‘digital fingerprint’, ‘digest’ or ‘checksum’.

The diagram above illustrates hashing: we take the text “abc123”, apply a hash function (SHA-1), and get a fixed-size alphanumeric output called the hash value. From the hash value alone we cannot get back the original input text.

Fundamentals of Hashing
Hash functions are one-way: we cannot reverse a hash value to recover the original content (irreversibility).
If we pass the same content through the same hash function, it will always produce the same hash value (determinism).
Imagine a scenario of storing passwords in software systems…

If we store passwords in plain text, anyone who has access to the database can view all the passwords and can even log in to the system using someone else's credentials. To overcome this we can use hashing.

Instead of saving the plain text password, we can hash the password using a hash function (h1) and store the hash value.

When a user tries to log in to the system, the user's input password is hashed using the same hash function (h1) and checked against the hash value stored in the table. If both hash values are equal, we allow the user to log in.

In the above table, john and sam have the same password, “abc123”, so after applying the hash function both of them get the same hash value. Imagine john has access to the database and can view the hashed passwords. John can notice that his password's hash value and sam's password's hash value are the same, so he will be able to log in to the system using sam's credentials. To overcome this we can apply a technique called salting.

Salted Hashing
In salted hashing, our goal is to make the hash value of each password unique. For that, the system generates a random set of characters called a salt. When the user enters the plain text password, the salt is appended to it, the combined text is sent to the hash function, and we store the resulting hash value (the salted hash). In this case, we also have to store the salt value for each user.

In the above table, even though john and sam have the same password, their hash values are different.

In the login process, the system fetches the salt value for the relevant user from the database, appends it to the input password, passes the result through the hash function, and checks the resulting hash value against the one stored in the table. If both hash values match, the user is authenticated.

Hash Collision
If two different inputs have the same hash value, it is called a collision. Since hash functions accept inputs of arbitrary length but produce outputs of a fixed length, there must exist two different inputs that produce the same hash value.

Here is an example of two files that display different content yet have the same SHA-1 value. A team of researchers from CWI (Centrum Wiskunde & Informatica) and Google managed to produce two different PDFs with the same SHA-1 hash value.

http://shattered.io/

SHA-512 produces hashes that are much longer than those produced by MD5, so collisions are far harder to find.

See the difference for yourself:

input text : “password”

MD5:     5f4dcc3b5aa765d61d8327deb882cf99
SHA-1:   5baa61e4c9b93f3f0682250b6cf8331b7ee68fd8
SHA-256: 5e884898da28047151d0e56f8dc6292773603d0d6aabbdd62a11ef721d1542d8
SHA-512: b109f3bbbc244eb82441917ed06d618b9008dd09b3befd1b5e07394c706a8bb980b1d7785e5976ec049b46df5f1326af5a2ea6d103fd07c95385ffab0cacbc86

Applications of Hashing
For password storage and authentication.
We have discussed this scenario above.

Integrity Protection.


Bob wants to send a message to Alice, but there is a middle man, Darth, who can modify the message on its way from Bob to Alice. So, how can Alice verify that the message she receives is the original one sent by Bob and was not modified during the communication?

What Bob can do is calculate the hash value of the message after writing it and send it along with the message. When Alice receives the message, she calculates the hash value of the message using the same hash function Bob used and checks it against the hash value she received from Bob. If both hash values are equal, Alice can verify that the message was not modified.

SSL Certificate Validation
HTTPS is reflected in the browser's URL bar to indicate a secure connection while accessing secure websites. In the SSL/TLS handshake, after the client's hello, the server sends its public key along with a certificate asserting that the public key belongs to the server. For a website (e.g., google.com) the certificate contains the domain name of the website; essentially, the certificate says that the public key sent along with it belongs to google.com. So how do you check the validity of this certificate? That's where hashing comes into play.

Here is the certificate of google.com. You can see the hash value of the google.com certificate under the Fingerprints section.


What basically happens is: your browser downloads the web server's certificate, calculates the hash value of the certificate itself, and then compares it with the hash value in the certificate. If the two hash values are equal, the certificate is verified.

Thursday, May 10, 2018

Secrets of Existence



Question 1 : What is dark matter?

All the ordinary matter we can find accounts for only about 4 percent of the universe. We know this by calculating how much mass would be needed to hold galaxies together and cause them to move about the way they do when they gather in large clusters. Another way to weigh the unseen matter is to look at how gravity bends the light from distant objects. Every measure tells astronomers that most of the universe is invisible. 

It's tempting to say that the universe must be full of dark clouds of dust or dead stars and be done with it, but there are persuasive arguments that this is not the case. First, although there are ways to spot even the darkest forms of matter, almost every attempt to find missing clouds and stars has failed. Second, and more convincing, cosmologists can make very precise calculations of the nuclear reactions that occurred right after the Big Bang and compare the expected results with the actual composition of the universe. Those calculations show that the total amount of ordinary matter, composed of familiar protons and neutrons, is much less than the total mass of the universe. Whatever the rest is, it isn't like the stuff of which we're made. 

The quest to find the missing universe is one of the key efforts that has brought cosmologists and particle physicists together. The leading dark-matter candidates are neutrinos or two other kinds of particles: neutralinos and axions, predicted by some physics theories but never detected. All three of these particles are thought to be electrically neutral, thus unable to absorb or reflect light, yet stable enough to have survived from the earliest moments after the Big Bang. 


Question 2 : What is dark energy?

Two recent discoveries from cosmology prove that ordinary matter and dark matter are still not enough to explain the structure of the universe. There's a third component out there, and it's not matter but some form of dark energy.

The first line of evidence for this mystery component comes from measurements of the geometry of the universe. Einstein theorized that all matter alters the shape of space and time around it. Therefore, the overall shape of the universe is governed by the total mass and energy within it. Recent studies of radiation left over from the Big Bang show that the universe has the simplest shape—it's flat. That, in turn, reveals the total mass density of the universe. But after adding up all the potential sources of dark matter and ordinary matter, astronomers still come up two-thirds short. 

The second line of evidence suggests that the mystery component must be energy. Observations of distant supernovas show that the rate of expansion of the universe isn't slowing as scientists had once assumed; in fact, the pace of the expansion is increasing. This cosmic acceleration is difficult to explain unless a pervasive repulsive force constantly pushes outward on the fabric of space and time. 

Why dark energy produces a repulsive force field is a bit complicated. Quantum theory says virtual particles can pop into existence for the briefest of moments before returning to nothingness. That means the vacuum of space is not a true void. Rather, space is filled with low-grade energy created when virtual particles and their antimatter partners momentarily pop into and out of existence, leaving behind a very small field called vacuum energy. 

That energy should produce a kind of negative pressure, or repulsion, thereby explaining why the universe's expansion is accelerating. Consider a simple analogy: If you pull back on a sealed plunger in an empty, airtight vessel, you'll create a near vacuum. At first, the plunger will offer little resistance, but the farther you pull, the greater the vacuum and the more the plunger will pull back against you. Although vacuum energy in outer space was pumped into it by the weird rules of quantum mechanics, not by someone pulling on a plunger, this example illustrates how repulsion can be created by a negative pressure. 


Question 3 : How were the heavy elements from iron to uranium made?

Both dark matter and possibly dark energy originate from the earliest days of the universe, when light elements such as helium and lithium arose. Heavier elements formed later inside stars, where nuclear reactions jammed protons and neutrons together to make new atomic nuclei. For instance, four hydrogen nuclei (one proton each) fuse through a series of reactions into a helium nucleus (two protons and two neutrons). That's what happens in our sun, and it produces the energy that warms Earth. 

But when fusion creates elements that are heavier than iron, it requires an excess of neutrons. Therefore, astronomers assume that heavier atoms are minted in supernova explosions, where there is a ready supply of neutrons, although the specifics of how this happens are unknown. More recently, some scientists have speculated that at least some of the heaviest elements, such as gold and lead, are formed in even more powerful blasts that occur when two neutron stars—tiny, burned-out stellar corpses—collide and collapse into a black hole.


Question 4 : Do neutrinos have mass?

Nuclear reactions such as those that create heavy elements also create vast numbers of ghostly subatomic bits known as neutrinos. These belong to a group of particles called leptons, such as the familiar electron and the muon and tau particles. Because neutrinos barely interact with ordinary matter, they can allow a direct look into the heart of a star. This works only if we are able to capture and study them, something physicists are just now learning to do. 

Not long ago, physicists thought neutrinos were massless, but recent advances indicate that these particles may have a small mass. Any such evidence would also help validate theories that seek to find a common description of three of the four natural forces—electromagnetism, strong force, and weak force. Even a tiny bit of heft would add up because a staggering number of neutrinos are left over from the Big Bang. 


Question 5 : Where do ultrahigh-energy particles come from?

The most energetic particles that strike us from space, which include neutrinos as well as gamma-ray photons and various other bits of subatomic shrapnel, are called cosmic rays. They bombard Earth all the time; a few are zipping through you as you read this article. Cosmic rays are sometimes so energetic, they must be born in cosmic accelerators fueled by cataclysms of staggering proportions. Scientists suspect some sources: the Big Bang itself, shock waves from supernovas collapsing into black holes, and matter accelerated as it is sucked into massive black holes at the centers of galaxies. Knowing where these particles originate and how they attain such colossal energies will help us understand how these violent objects operate.


Question 6 : Is a new theory of light and matter needed to explain what happens at very high energies and temperatures?
All of that violence cited in question 5 leaves a visible trail of radiation, especially in the form of gamma rays—the extremely energetic cousins of ordinary light. Astronomers have known for three decades that brilliant flashes of these rays, called gamma-ray bursts, arrive daily from random directions in the sky. Recently astronomers have pinned down the location of the bursts and tentatively identified them as massive supernova explosions and neutron stars colliding both with themselves and black holes. But even now nobody knows much about what goes on when so much energy is flying around. Matter grows so hot that it interacts with radiation in unfamiliar ways, and photons of radiation can crash into each other and create new matter. The distinction between matter and energy grows blurry. Throw in the added factor of magnetism, and physicists can make only rough guesses about what happens in these hellish settings. Perhaps current theories simply aren't adequate to explain them.


Question 7 : Are there new states of matter at ultrahigh temperatures and densities?

Under extreme energetic conditions, matter undergoes a series of transitions, and atoms break down into their smallest constituent parts. Those parts are elementary particles called quarks and leptons, which as far as we know cannot be subdivided into smaller parts. Quarks are extremely sociable and are never observed in nature alone. Rather, they combine with other quarks to form protons and neutrons (three quarks per proton) that further combine with leptons (such as electrons) to form whole atoms. The hydrogen atom, for example, is made up of an electron orbiting a single proton. Atoms, in turn, bind to other atoms to form molecules, such as H2O. As temperatures increase, molecules transform from a solid such as ice, to a liquid such as water, to a gas such as steam. 

That's all predictable, known science, but at temperatures and densities billions of times greater than those on Earth, it's possible that the elementary parts of atoms may come completely unglued from one another, forming a plasma of quarks and the energy that binds quarks together. Physicists are trying to create this state of matter, a quark-gluon plasma, at a particle collider on Long Island. At still higher temperatures and pressures, far beyond those scientists can create in a laboratory, the plasma may transmute into a new form of matter or energy. Such phase transitions may reveal new forces of nature. 

These new forces would be added to the three forces that are already known to regulate the behavior of quarks. The so-called strong force is the primary agent that binds these particles together. The second atomic force, called the weak force, can transform one type of quark into another (there are six different "flavors" of quark—up, down, charm, strange, top, and bottom). The final atomic force, electromagnetism, binds electrically charged particles such as protons and electrons together. As its name implies, the strong force is by far the most muscular of the three, more than 100 times as powerful as electromagnetism and 10,000 times stronger than the weak force. Particle physicists suspect the three forces are different manifestations of a single energy field in much the same way that electricity and magnetism are different facets of an electromagnetic field. In fact, physicists have already shown the underlying unity between electromagnetism and the weak force. 

Some unified field theories suggest that in the ultrahot primordial universe just after the Big Bang, the strong, weak, electromagnetic, and other forces were one, then unraveled as the cosmos expanded and cooled. The possibility that a unification of forces occurred in the newborn universe is a prime reason particle physicists are taking such a keen interest in astronomy and why astronomers are turning to particle physics for clues about how these forces may have played a role in the birth of the universe. For unification of forces to occur, there must be a new class of supermassive particles called gauge bosons. If they exist, they will allow quarks to change into other particles, causing the protons that lie at the heart of every atom to decay. And if physicists prove protons can decay, the finding will verify the existence of new forces. 

That raises the next question.


Question 8 : Are protons unstable?
In case you're worried that the protons you're made of will disintegrate, transforming you into a puddle of elementary particles and free energy, don't sweat it. Various observations and experiments show that protons must be stable for at least a billion trillion trillion years. However, many physicists believe that if the three atomic forces are really just different manifestations of a single unified field, the alchemical, supermassive bosons described above will materialize out of quarks every now and then, causing quarks, and the protons they compose, to degenerate. 

At first glance, you'd be forgiven for thinking these physicists had experienced some sort of mental decay on the grounds that tiny quarks are unlikely to give birth to behemoth bosons weighing more than 10,000,000,000,000,000 times themselves. But there's something called the Heisenberg uncertainty principle, which states that you can never know both the momentum and the position of a particle at the same time, and it indirectly allows for such an outrageous proposition. Therefore, it's possible for a massive boson to pop out of a quark making up a proton for a very short time and cause that proton to decay. 


Question 9 : What is gravity?

Next there's the matter of gravity, the odd force out when it comes to small particles and the energy that holds them together. When Einstein improved on Newton's theory, he extended the concept of gravity by taking into account both extremely large gravitational fields and objects moving at velocities close to the speed of light. These extensions lead to the famous concepts of relativity and space-time. But Einstein's theories do not pay any attention to quantum mechanics, the realm of the extremely small, because gravitational forces are negligible at small scales, and discrete packets of gravity, unlike discrete packets of energy that hold atoms together, have never been experimentally observed. 

Nonetheless, there are extreme conditions in nature in which gravity is compelled to get up close and personal with the small stuff. For example, near the heart of a black hole, where huge amounts of matter are squeezed into quantum spaces, gravitational forces become very powerful at tiny distances. The same must have been true in the dense primordial universe around the time of the Big Bang. 

Physicist Stephen Hawking identified a specific problem about black holes that requires a bridging of quantum mechanics and gravity before we can have a unified theory of anything. According to Hawking, the assertion that nothing, even light, can escape from a black hole is not strictly true. Weak thermal energy does radiate from around black holes. Hawking theorized that this energy is born when particle-antiparticle pairs materialize from the vacuum in the vicinity of a black hole. Before the matter-antimatter particles can recombine and annihilate each other, one that may be slightly closer to the black hole will be sucked in, while the other that is slightly farther away escapes as heat. This release does not connect in any obvious way to the states of matter and energy that were earlier sucked into that black hole and therefore violates a law of quantum physics stipulating that all events must be traceable to previous events. New theories may be needed to explain this problem. 


Question 10 : Are there additional dimensions?

Wondering about the real nature of gravity leads eventually to wondering whether there are more than the four dimensions we can easily observe. To get to that place, we might first wonder if nature is, in fact, schizophrenic: Should we accept that there are two kinds of forces that operate over two different scales—gravity for big scales like galaxies, the other three forces for the tiny world of atoms? Poppycock, say unified theory proponents—there must be a way to connect the three atomic-scale forces with gravity. Maybe, but it won't be easy. In the first place, gravity is odd. Einstein's general theory of relativity says gravity isn't so much a force as it is an inherent property of space and time. Accordingly, Earth orbits the sun not because it is attracted by gravity but because it has been caught in a big dimple in space-time caused by the sun and spins around inside this dimple like a fast-moving marble caught in a large bowl. Second, gravity, as far as we have been able to detect, is a continuous phenomenon, whereas all the other forces of nature come in discrete packets.

All this leads us to the string theorists and their explanation for gravity, which includes other dimensions. The original string-theory model of the universe combines gravity with the other three forces in a complex 11-dimensional world. In that world—our world—seven of the dimensions are wrapped up on themselves in unimaginably small regions that escape our notice. One way to get your mind around these extra dimensions is to visualize a single strand of a spiderweb. To the naked eye, the filament appears to be one dimensional, but at high magnification it resolves into an object with considerable width, breadth, and depth. String theorists argue that we can't see extra dimensions because we lack instruments powerful enough to resolve them. 

We may never see these extra dimensions directly, but we may be able to detect evidence of their existence with the instruments of astronomers and particle physicists. 


Question 11 : How did the universe begin?

If all four forces of nature are really a single force that takes on different complexions at temperatures below several million degrees, then the unimaginably hot and dense universe that existed at the Big Bang must have been a place where distinctions between gravity, strong force, particles, and antiparticles had no meaning. Einstein's theories of matter and space-time, which depend upon more familiar benchmarks, cannot explain what caused the hot primordial pinpoint of the universe to inflate into the universe we see today. We don't even know why the universe is full of matter. According to current physics ideas, energy in the early universe should have produced an equal mix of matter and antimatter, which would later annihilate each other. Some mysterious and very helpful mechanism tipped the scales in favor of matter, leaving enough to produce galaxies full of stars.

Fortunately, the primordial universe left behind a few clues. One is the cosmic microwave background radiation, the afterglow of the Big Bang. For several decades now, that weak radiation measured the same wherever astronomers looked at the edges of the universe. Astronomers believed such uniformity meant that the Big Bang commenced with an inflation of space-time that unfolded faster than the speed of light. 

More recent careful observation, however, shows that the cosmic background radiation is not perfectly uniform. There are minuscule variations from one small patch of space to another that are randomly distributed. Could random quantum fluctuations in the density of the early universe have left this fingerprint? Very possibly, says Michael Turner, chairman of the astrophysics department at the University of Chicago and chairman of the committee that came up with these 11 questions. Turner and many other cosmologists now believe the lumps of the universe—vast stretches of void punctuated by galaxies and galactic clusters—are probably vastly magnified versions of quantum fluctuations of the original, subatomic-size universe. 

And that is just the sort of marriage of the infinite and the infinitesimal that has particle physicists cozying up to astronomers these days, and why all 11 of these mysteries might soon be explained by one idea. 



Real Question : How Did We Get Here?


Astronomers cannot see all the way back in time to the origin of the universe, but by drawing on lots of clues and theory, they can imagine how everything began. 

Their model starts with the entire universe as a very hot dot, much smaller than the diameter of an atom. The dot began to expand faster than the speed of light, an expansion called the Big Bang. Cosmologists are still arguing about the exact mechanism that may have set this event in motion. From there on out, however, they are in remarkable agreement about what happened. As the baby universe expanded, it cooled the various forms of matter and antimatter it contained, such as quarks and leptons, along with their antimatter twins, antiquarks and antileptons. These particles promptly smashed into and annihilated one another, leaving behind a small residue of matter and a lot of energy. The universe continued to cool down until the few quarks that survived could latch together into protons and neutrons, which in turn formed the nuclei of hydrogen, helium, deuterium, and lithium. For 300,000 years, this soup stayed too hot for electrons to bind to the nuclei and form complete atoms. But once temperatures dropped enough, the same hydrogen, helium, deuterium, and lithium atoms that are around today formed, ready to start a long journey into becoming dust, planets, stars, galaxies, and lawyers. 

Gravity—the weakest of the forces but the only one that acts cumulatively across long distances—gradually took control, gathering gas and dust into massive globs that collapsed in on themselves until fusion reactions were ignited and the first stars were born. At much larger scales, gravity pulled together huge regions of denser-than-average gas. These evolved into clusters of galaxies, each one brimming with billions of stars. 

Over the eons fusion reactions inside stars transformed hydrogen and helium into other atomic nuclei, including carbon, the basis for all life on Earth. 

The most massive stars sometimes exploded in energetic supernovas that produced even heavier elements, up to and including iron. Where the heaviest elements, such as uranium and lead, came from still remains something of a mystery. 



Wednesday, May 9, 2018

WebApp RESTful API

I have created an authorization server and a resource server in a single API. There is an endpoint that you can call in order to retrieve the resources for demonstration purposes.
This is written using Node.js. In order to run this on your computer, you need to have Node.js installed.

app.js

As you can see, the OAuth grant type I have given is client_credentials. This has to be mentioned in the request body when you try to get the access token from the authorization server.
Also, this app runs on port 4000; you can give any port number here.
There are two endpoints I have created: one to get the access token, which is "/oauth/token", and the other to get the resources, which is "/profile".
As the resource I have hard-coded one value, a name ("Waas"), which comes back as a JSON object.

model.js


Here I have created a sample user (username = admin, password = admin), and all the functions that handle requests from the client are written in this file.

Run the app.js file.


To make the GET and POST requests to the resource server we use the RESTClient Mozilla Firefox add-on. You can use other similar products, such as Postman, for this.

First of all, we have to make a POST request to get the access token from the authorization server.
For that we have to send the client credentials in the Authorization header.

Authorization: Basic XXXXXXXXXXXXXXX

We also have to set the content type in the header (for OAuth token requests this is application/x-www-form-urlencoded).

I'll demonstrate with RESTClient on Mozilla Firefox, creating all the requests manually, and of course show how to retrieve the resources.


Then we have to mention these 3 parameters in the body.
username=test
password=test
grant_type=client_credentials

The URL should be the endpoint that gives us the access token.

http://localhost:4000/oauth/token 


When we send this, we get a response which has the access token in it. The access token also has an expiration time.

Then we have to make a GET request to retrieve the resources we need.



Now the URL is different because we have to call a different endpoint to get the resources, which is "http://localhost:4000/profile".
We do not have to mention anything in the body.
In the request header we should send the access token we got in the previous step.

Authorization: Bearer XXXXXXXXXXXXXXX

Make sure that the access token is not expired. Otherwise you will get an error message saying that it has expired.

When you send this request, you get a response that contains the resources we specified in the code.

Find the Source code from here.

Double Submit Cookies

Cross-site Request Forgery protection in web applications via Double Submit Cookies Patterns.


In the previous blog post I described the Synchronizer Token Pattern approach, which can be applied as a prevention method for Cross-Site Request Forgery (CSRF). In this blog post I will share some knowledge on another CSRF prevention technique: the Double Submit Cookie Pattern approach.


According to Wikipedia, the Double Submit Cookie technique is defined as sending a random value in both a cookie and as a request parameter, with the server verifying that the cookie value and the request value match.

Sample Demonstration

As in the previous blog post, I just created a simple login form with hard-coded user credentials.


Upon login, I generate a session identifier and set it as a cookie in the browser. At the same time, I generate the CSRF token for the session and set it as a cookie in the browser as well.

After a successful login it will redirect you to another page which contains a form to be filled in.

When the form is submitted to the action, the CSRF token cookie is sent along with the request, and the CSRF token value is also submitted in the form body.

In the web page that accepts the form submission (the URL of the action), obtain the CSRF token received in the cookie and the one in the message body. Compare the two values: if they match, show a success message; if not, show an error message.


You can find the source code from here.

Synchronise Tokens

Cross Site Request Forgery.

According to Wikipedia, "Cross-Site Request Forgery (also known as a one-click attack or session riding, and abbreviated as CSRF or XSRF) is a type of malicious exploit of a website whereby unauthorized commands are transmitted from a user that the website trusts."
XSS is a vulnerability that exploits the trust a user has in a website/server; CSRF exploits the trust the server has in the user.
A CSRF vulnerability makes use of the fact that the website doesn't verify whether the request is coming from a legitimate user or not. Rather, it just checks that the request is coming from the browser of an authorized user.

Requirements for a CSRF attack to work:

1. The victim must be authenticated to the server.
2. The attacker has to send a crafted link to the victim. This link is crafted in such a way that it sends a request to the target website.
3. The victim must click/execute the malicious link from a browser which already has a session. It sends a request on the victim's behalf and executes a specific task using the current session.

Preventing CSRF vulnerabilities

1. Synchronize Token Patterns approach.
2. Double submit cookies approach.

The Synchronizer Token Pattern approach will be discussed in this blog post. The double submit cookies approach will be discussed in a future blog post.
  • Any state changing operation requires a secure random token (e.g., CSRF token) to prevent CSRF attacks.
  • The CSRF token value should be
    • Unique per user session.
    • A random value.
    • Generated by a cryptographically secure random number generator (note that MD5 and SHA-1 are hash functions, not secure random number generators, and should not be used for this).
  • The CSRF token is added as a hidden field for forms, or within the URL if the state-changing operation occurs via a GET.
  • The server should reject the requested action if the CSRF token fails validation.
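As a sketch of what "generated by a cryptographically secure random number generator" means in practice, Python's secrets module (my choice of language here; the original example may use something else) produces tokens with the properties listed above:

```python
import secrets

def generate_csrf_token():
    # secrets draws from the OS's cryptographically secure random source,
    # so each token is unpredictable and practically unique per call.
    # 32 bytes of entropy, URL-safe base64 encoded (~43 characters).
    return secrets.token_urlsafe(32)
```

The equivalent in Java would be SecureRandom, and in Node.js crypto.randomBytes; the important point is to avoid predictable sources like timestamps or plain hash functions.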
I have implemented a small example to demonstrate the Synchronizer Token Pattern approach.

I used hard-coded user credentials for demonstration purposes.
Upon login, I generate a session identifier and set it as a cookie in the browser. At the same time, a CSRF token is generated and saved on the server side.
In the website, I have implemented an endpoint that accepts HTTP POST requests and responds with the CSRF token. The endpoint receives the session cookie and, based on the session identifier, returns the CSRF token value.
I have implemented a web page that has an HTML form. The method is POST and the action is another URL on the website.

When this page loads, an Ajax call is executed via JavaScript, which invokes the endpoint to obtain the CSRF token created for the session.

I have added a hidden field to the web page that holds the value of the received CSRF token.
Once the HTML form is submitted to the action, the server side extracts the received CSRF token value and checks whether it is the correct token issued for that particular session: it obtains the session cookie, looks up the corresponding CSRF token for the session, and compares it with the received token value. If the received CSRF token is valid, a success message is shown; if not, an error message.
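The validation flow described above can be sketched as follows (a minimal in-memory sketch; the function names and the dict-based token store are my own, not from the linked source code — a real application would use its session framework's storage):

```python
import hmac
import secrets

# Server-side token store keyed by session identifier (hypothetical).
csrf_tokens = {}

def create_session():
    """On login: create a session and save its CSRF token server-side."""
    session_id = secrets.token_urlsafe(32)
    csrf_tokens[session_id] = secrets.token_urlsafe(32)
    return session_id

def get_csrf_token(session_id):
    """What the Ajax endpoint returns for the session cookie it receives."""
    return csrf_tokens.get(session_id)

def validate_submission(session_id, submitted_token):
    """Server-side check when the form is posted."""
    expected = csrf_tokens.get(session_id)
    if expected is None or submitted_token is None:
        return False  # unknown session or missing token field
    return hmac.compare_digest(expected, submitted_token)
```

An attacker's forged request would carry the victim's session cookie automatically, but it cannot contain the correct hidden-field token, so validate_submission rejects it.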
You can find the source code from here.

Thursday, April 19, 2018

Bypassing Local Windows Authentication to Defeat Full Disk Encryption

Full Disk Encryption
• A scheme for protecting data at rest. Encrypts an entire disk or volume.
• Mitigates the impact of a threat with physical access; generally does not provide protection against remote adversaries.
• Encrypts everything, often including the OS.

Microsoft BitLocker
• BitLocker is Microsoft's proprietary full-disk encryption feature.
• Built into all professional/enterprise versions of Windows since Vista.
• Uses the system's Trusted Platform Module (TPM) to store the master encryption key.

What is a TPM?
• A TPM is a hardware module responsible for performing cryptographic operations, performing attestation, and storing secrets.
• It has fairly general APIs, so how it is used is mostly up to applications.
• Example applications include remote attestation, and storing encryption keys.

Storing Secrets on a TPM
• A TPM contains several Platform Configuration Registers (PCRs).
• Starting with the BIOS (which is assumed to be trusted), the next part of the boot process (e.g. the MBR) is hashed, and this value is stored in a PCR.
• Each stage of the boot process is responsible for hashing the next and storing it in a PCR.

Storing Secrets on a TPM
• At boot, the TPM has a zero in all PCR registers.
• Whenever the TPM is told to update a register r with a value v, it always sets: r = HASH (r | v)
• So PCR values can never get set directly, only appended to. Arbitrary PCR values cannot be spoofed.
• This means a set of values in the PCRs can only be replicated by having that same boot chain.
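The extend rule r = HASH(r | v) can be simulated in a few lines of Python (assuming SHA-1 and 20-byte PCRs, as in TPM 1.2; the stage names are made up for illustration):

```python
import hashlib

def extend_pcr(pcr, measurement):
    """PCR update rule: r = HASH(r || v). The old value is always an input,
    so a PCR can only be extended, never set directly."""
    return hashlib.sha1(pcr + measurement).digest()

# At boot every PCR starts at zero (20 zero bytes for SHA-1).
pcr = bytes(20)
for stage in [b"bios", b"mbr", b"bootloader"]:
    # Each boot stage hashes the next and extends the PCR with it.
    pcr = extend_pcr(pcr, hashlib.sha1(stage).digest())
```

Because each value chains over the previous one, the final PCR value is reproducible only by measuring the exact same components in the exact same order.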

Storing Secrets on a TPM
• When the TPM stores a secret key, that key can be sealed. When a key is sealed, the TPM references the current value of the PCRs.
• An API call to unseal that key will fail unless the current PCR values match the original values from when the key was sealed.
• So effectively, only the original boot process will be able to retrieve that secret key.

Transparent BitLocker
• BitLocker, in addition to the TPM, can optionally require a PIN or a key saved on a USB drive.
• However, its recommended configuration works transparently. It seals the secret key in the TPM, and only BitLocker can retrieve it.
• Your computer boots up to a login screen as usual, with no indication that FDE is enabled.


Attacks Given Physical Access
• Known Hardware Attacks
            – Attack the TPM (grounding control pins)
            – Do a cold-boot attack to get the key from RAM
• Attack an early part of the boot chain
            – Flash the BIOS/EFI with a custom image
            – Look for a defect in the BIOS, MBR, or boot loader

• Or we can attack the OS itself and see if Windows will give us the key...

Booting Up With BitLocker


Local Windows Authentication
• The Local Security Authority (LSA) manages authentication, usually using a Security Subsystem Provider (SSP).
• For a client-domain authentication, the Kerberos SSP exchanges messages with the Domain Controller (DC).
            – When attacking FDE, we have physical access. So we control the network and can run a “mock” DC.

Windows Domain Authentication
• The login process requests a ticket-granting ticket (TGT) from the DC.
            – The TGT reply includes a secret key S, encrypted by the DC with the user's saved password. The login screen decrypts S using the typed password.


Windows Domain Authentication
• TGT and S are used to request a service ticket T from the DC for the target service (in this case, the local workstation).
            – The local workstation verifies T

Machine Passwords
• When a workstation first joins a domain...
            – A secret key is generated, called the machine password.
            – This password is sent to the DC, so they have a shared secret for future communication.
• To grant access to the workstation, the login process must present a valid service ticket T.
            – This ticket is signed using the machine password.
            – Which we don't have...

If the DC uses the wrong machine password


The Local Credentials Cache
• A user can login when the DC isn’t available
            – Like when you’re using your laptop at a conference during someone’s talk…
• The cache is usually updated whenever the workstation sees the credentials are changed.
            – So it's updated when you successfully login and were authenticating against the DC.
            – Also updated when you change your domain password.

Too Bad We Can't Change the Password On the Login Screen


Password Reset


Poisoned Credentials Cache





What Now?
• Dump the BitLocker key from kernel memory
            – As long as the domain account is a local admin
            – Although at this point you already have access to all the local user files, so it's pretty moot.
• Just dig through personal data
            – Saved passwords, Outlook emails, source code…
            – Drop in a trojan / backdoor, or whatever other malware you like.

System Configurations Affected
• Applies to any computer with:
            – BitLocker without pre-boot authentication
            – Attached to a domain
            – With at least one person having logged in with a domain account.
• Tested on Windows Vista, Windows 7, Windows 8.1, and Windows 10.
            – (Also Windows XP and Windows 2000)

How Else Does This Attack Apply?
• This isn't really BitLocker specific. More generally, this is an authentication bypass for domain accounts.
• If someone is logged in, locks their screen, and steps away, you could use this to unlock the PC.
            – Someone on their laptop at a coffee shop.
            – A computer in an office.

Impact and Mitigation
• This attack is 100% reliable, software-only, low-sophistication, and takes a matter of seconds.
• You could use BitLocker with pre-boot authentication (i.e. using a PIN or USB key)
• You could use a BIOS password on boot
• Microsoft is releasing an update to address the issue. Expected release is November 10.
– ACK to the Microsoft Security Response Center

Reflections: Why Does This Work?
• The protocol for password changes was specified in RFC 3244 for Windows 2000, published in 2002.
• At that point, local access was total access. Local access wasn’t a valid threat model during protocol design.
• But local access is precisely the threat model under which FDE is applicable.

Black Hat Sound Bytes
• A defect in Windows domain authentication means BitLocker Full Disk Encryption can be bypassed; the attack is fast and non-technical.
• Microsoft is releasing a patch for the issue (expected November 10). Make sure all your workstations are up-to-date!
• Threat models change; when they do, you need to re-evaluate previous security choices.


Tuesday, April 10, 2018

How Bitcoin Mining/Block Rewards Work


Many people new to Bitcoin in 2018 are just buying and holding it, but quite a few are getting involved with Bitcoin mining.

In this guide we're going to explain how Bitcoin mining rewards work, covering what a block reward is, how it's calculated/created, and how the money is split between mining pools and individual miners.

There are two aspects of mining where you earn money: the block reward and transaction fees. The block reward part is often called the 'coinbase', so you may see these terms used interchangeably - not to be confused with the Coinbase exchange. Both of these rewards are given in Bitcoin.

What are Block Rewards?

A Bitcoin block is 1MB in size, and Bitcoin transactions are stored inside these blocks (each time someone sends Bitcoin, a new transaction is added). If a miner mines a new block, they're given a reward in the form of the block reward (coinbase). This is the main incentive for Bitcoin miners, as the block reward is 12.5 BTC as of writing this, or around $150,000, a significant amount of money.

The block reward is halved every 210,000 blocks, which is approximately every 4 years. You can see Bitcoin's code for this here. When Bitcoin was created the block reward was 50 BTC, and it is now 12.5 BTC. This decrease in block reward means that over time fewer and fewer new Bitcoin are created, which combined with increased demand is theorised to keep pushing Bitcoin's price up - so in principle the USD value of the block reward should be similar in 10 years' time. When the block reward has halved 64 times, it becomes 0.
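The halving schedule can be sketched in Python, mirroring the logic of the subsidy calculation in Bitcoin's source (values in satoshis; this is a simplified sketch, not the actual C++ code):

```python
COIN = 100_000_000          # satoshis per BTC
HALVING_INTERVAL = 210_000  # blocks between halvings

def block_subsidy(height):
    """Block reward in satoshis at a given block height."""
    halvings = height // HALVING_INTERVAL
    if halvings >= 64:
        return 0  # after 64 halvings the reward is defined as zero
    # Integer right-shift halves the initial 50 BTC reward each era.
    return (50 * COIN) >> halvings
```

Because the shift is an integer operation, sub-satoshi remainders are simply dropped, which is why the total supply converges just below 21 million BTC.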

This block reward has to be claimed by the miner, who adds it as the first transaction in the block. It has no inputs, but has an output to the miner's wallet address. Here is an example on Block Explorer (it should be the first transaction in the list).

What are Transaction Fee Rewards?

When sending Bitcoin, a fee needs to be paid by the user - called a transaction fee. This exists to incentivise miners to include transactions in mined blocks. It's effectively a bidding war to get your transaction into a block, where whoever pays the highest fee is processed first. A side effect of high demand for sending Bitcoin is more transactions being sent, and higher fees.

This transaction fee is given to miners, so essentially - the more congested the Bitcoin network, the more money miners earn. This fee is essentially an extra payment sent with any Bitcoin transaction, and can be worked out by subtracting the outputs from the inputs of a transaction. As the block reward (coinbase) reduces over time, if Bitcoin price doesn't increase at the same rate - these fees can provide an incentive for miners to continue mining.
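Working out the fee from a transaction's inputs and outputs is a one-liner (a sketch, with hypothetical values in satoshis):

```python
def transaction_fee(input_values, output_values):
    """A transaction's fee is whatever input value is not claimed by an output."""
    return sum(input_values) - sum(output_values)

# e.g. spending inputs worth 150,000 satoshis while the outputs only
# claim 140,000 leaves a 10,000 satoshi fee for the miner.
```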

How do pools distribute rewards?

So when you start mining, you might have a dream of getting, say, 13-14 BTC in a week. You need to be aware that there is a huge number of people competing to create new blocks. By creating a new mining pool by yourself, the chance of getting this block reward is extremely low - although if you did get it by chance, you'd get a significant reward. Instead, most miners join an existing mining pool - where they get a steadier income rather than having to wait years for a block reward by themselves. Mining pools are large groups of miners, where if any one of them creates a new block, the reward is shared based on how much work each miner contributed.

Work is defined in hash power or hashrate, which in general means how many guesses can be made per second for the required hash. The split between miners differs between mining pools; we're going to use Slushpool as an example in this guide, but you can see how other pools work here.

How does Slushpool distribute rewards?

Slushpool, which has 11.1% of Bitcoin's total hashpower at the time of writing this (January 25th 2018), distributes rewards based on its miners submitting proof of the work they're doing. For example, if the goal is a hash that starts with 18 zeros, a miner can submit any time after they've found the first 8 - which would prove that they've done work to get that far. They'd need to get all 18 zeros to win the block, but it at least proves the miner is putting the effort in - and so they should be rewarded for it. The split is counted as the amount of work each miner has proved vs. the total work proven by all the miners in the pool.
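A proportional split like the one described can be sketched as follows (a simplified model with made-up numbers; Slushpool's real scoring system is more elaborate):

```python
def pool_payouts(block_reward, shares):
    """Split a reward in proportion to each miner's proven shares.

    block_reward: total reward in satoshis
    shares: dict of miner name -> number of proven shares (work submitted)
    """
    total = sum(shares.values())
    # Integer division: fractional satoshis are dropped, as a real
    # accounting system would round somewhere.
    return {miner: block_reward * count // total
            for miner, count in shares.items()}
```

So a miner who proved 3 out of every 4 shares in the pool would receive roughly three quarters of each reward the pool wins, regardless of who actually found the winning nonce.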

Let's step back a moment though: now that we know how much work everyone's done, how is the reward distributed? The block reward for the miner who was lucky enough to find it would be very large - a lot more than that miner would see as a return from the pool in the short term. What stops the miner taking that reward and leaving, as if they were in their own pool? Well, the blocks are pre-built by the pool. Everything except the nonce (the value in the block that miners change to get a hash with a certain number of preceding zeros) must stay the same. One would assume that the pool can then just verify the nonce, and rewards wouldn't be awarded if the user changes the address (as the hash won't pass when being verified by the pool) - incentivising miners to follow the pool's rules (although we are yet to find documentation on this).

How are Rewards Split Between Pools?

This part is nice and simple. Whichever pool guesses a block's hash first wins the block reward. The more hashing power a pool has, the higher the probability that the pool will succeed. Extend this over a long period of time, and the reward split between pools should be similar to each pool's share of total hashpower. Slushpool, for example, which currently has 11.1% of hashpower, should receive around 11.1% of block rewards and 11.1% of transaction fees.



Sunday, April 8, 2018

Reverse Engineering

Reverse Engineering is the conversion of information from a low-level format, usually readable only by a computer, into a higher level format, which is easily readable by humans. Typical examples of reverse engineering tools are disassemblers and decompilers, which translate an object file produced by some compiler into an ASCII representation.



The reverse engineer can reuse the obtained code in his own programs or change an existing (already compiled) program to perform in other ways. He can use the knowledge obtained from reverse engineering to improve application programs, for instance by finding and fixing errors, also known as bugs. But most importantly, one can get extremely useful ideas by observing how other programmers work and think, and thus improve one's own skills and knowledge!

What comes to mind when we hear RE is cracking. Cracking is as old as programs themselves. To crack a program means to trace and use a serial number or any other kind of registration data needed for the proper operation of the program. Therefore, if a shareware program (freely distributed, but with some restrictions, like crippled functions, nag screens or limited capabilities) needs valid registration data, a reverse engineer can recover that information by decompiling a particular part of the program.

In the past, many software companies have accused others of reverse engineering their products and stealing technology and knowledge. Reverse engineering is not limited to computer applications; the same happens with cars, weapons, hi-fi equipment, etc.




Tuesday, April 3, 2018

How To Apt-Get Update, Upgrade, Dist-Upgrade, Full-Upgrade and Their Similarities and Differences

Deb-based distributions provide apt and apt-get to manage packages interactively and from network repositories. While updating packages, the update, upgrade or dist-upgrade commands can be used. But what is the difference between these commands? In this tutorial we will look at this issue.

Update

The update command downloads the package lists from the repositories defined in /etc/apt/sources.list and refreshes the local package index with information about the newest available versions of packages and their dependencies. It does not install or upgrade any packages itself, which is why it should be run before the upgrade command. We need root privileges to complete the update operation, so we use sudo.

Upgrade

The real upgrade operation is done with the upgrade command. This command downloads packages and upgrades them accordingly, so the upgrade command is run after the update command. We need root privileges to complete the upgrade operation, so we use sudo before the upgrade command.

    upgrade is used to install the newest versions of all packages
    currently installed on the system from the sources enumerated in
    /etc/apt/sources.list. Packages currently installed with new
    versions available are retrieved and upgraded; under no
    circumstances are currently installed packages removed, or packages
    not already installed retrieved and installed. New versions of
    currently installed packages that cannot be upgraded without
    changing the install status of another package will be left at
    their current version. An update must be performed first so that
    apt-get knows that new versions of packages are available.

Dist-Upgrade

The dist-upgrade command is very similar to the upgrade command. It upgrades packages too, but it also intelligently handles changing dependencies with new package versions: it may install new packages or remove existing ones when required to complete the upgrade, which makes it more thorough than a plain upgrade.


    dist-upgrade in addition to performing the function of upgrade,
    also intelligently handles changing dependencies with new versions
    of packages; apt-get has a "smart" conflict resolution system, and
    it will attempt to upgrade the most important packages at the
    expense of less important ones if necessary. So, dist-upgrade
    command may remove some packages. The /etc/apt/sources.list file
    contains a list of locations from which to retrieve desired package
    files. See also apt_preferences(5) for a mechanism for overriding
    the general settings for individual packages.

Full-Upgrade

full-upgrade is the same as dist-upgrade, so we can use both commands interchangeably.

