WannaCry a Birthday Gift??

I woke up on the 12th of May, my birthday, and my news feed was already full of articles about the WannaCry ransomware sweeping across the globe.

Chart: number of Symantec detections for WannaCry, May 11 to 15.
Over the last few days, a new piece of malware called WannaCrypt has done worldwide damage. It combines the characteristics of ransomware and a worm, and has hit a large number of machines across enterprises and government organizations around the world:

https://www.theregister.co.uk/2017/05/13/wannacrypt_ransomware_worm/

While much of the attention around this attack has focused on the vulnerabilities in Microsoft Windows XP, please pay attention to the following:

  • The attack works on all versions of Windows that haven’t been patched since the March patch release!
  • The malware can only exploit these vulnerabilities once it gets onto a network. There are reports that it is being spread via email phishing or malicious websites, but these reports remain unconfirmed.

 

Please take the following actions immediately:

  • Make sure all systems on your network are fully patched, particularly servers.
  • As a precaution, please ask all colleagues at your location to be very careful about opening email attachments and to minimise browsing the web while this attack is ongoing.

 

The vulnerabilities are fixed by the following Microsoft security patch, released in March 2017; please ensure you have patched your systems:

https://technet.microsoft.com/en-us/library/security/ms17-010.aspx

Details of the malware can be found below. The worm scans TCP port 445, the Windows SMB service used for file sharing (a quick reachability check sketch follows the link):

https://securelist.com/blog/incidents/78351/wannacry-ransomware-used-in-widespread-attacks-all-over-the-world/
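If you want a very rough first look at exposure before running a proper scan, a few lines of Python can check whether TCP/445 is reachable on a handful of hosts. This is only a sketch of mine (the host list is made up); use a real scanner such as Nmap for any proper assessment:

import socket

# hypothetical hosts to check -- replace with your own ranges
hosts = ["192.168.0.10", "192.168.0.11", "192.168.0.12"]

for host in hosts:
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.settimeout(2)  # don't hang on filtered ports
    try:
        s.connect((host, 445))
        print("%s: TCP/445 open (SMB reachable)" % host)
    except (socket.timeout, socket.error):
        print("%s: TCP/445 closed or filtered" % host)
    finally:
        s.close()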

A preliminary review shows that our environment is not infected, based on all of the hashes and the domain found below (a small sketch for sweeping files against these hashes follows the hash list):

 

URL:

www.iuqerfsodp9ifjaposdfjhgosurijfaewrwergwea.com

MD5 hash:

4fef5e34143e646dbf9907c4374276f5
5bef35496fcbdbe841c82f4d1ab8b7c2
775a0631fb8229b2aa3d7621427085ad
7bf2b57f2a205768755c07f238fb32cc
7f7ccaa16fb15eb1c7399d422f8363e8
8495400f199ac77853c53b5a3f278f3e
84c82835a5d21bbcf75a61706d8ab549
86721e64ffbd69aa6944b9672bcabb6d
8dd63adb68ef053e044a5a2f46e0d2cd
b0ad5902366f860f85b892867e5b1e87
d6114ba5f10ad67a4131ab72531f02da
db349b97c37d22f5ea1d1841e3c89eb4
e372d07207b4da75b3434584cd9f3450
f529f4556a5126bba499c26d67892240
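To sweep a directory against these hashes, a few lines of Python are enough for a rough check. This is just a convenience sketch (the scan root is a placeholder, and only two of the hashes above are spelled out); proper endpoint security tooling is the right way to do this at scale:

import hashlib
import os

# IOC hashes from the list above (add the rest of the list here)
iocs = {
    "4fef5e34143e646dbf9907c4374276f5",
    "84c82835a5d21bbcf75a61706d8ab549",
}

def md5_of(path, chunk=1024 * 1024):
    # hash the file in chunks so large files don't exhaust memory
    h = hashlib.md5()
    with open(path, "rb") as f:
        for block in iter(lambda: f.read(chunk), b""):
            h.update(block)
    return h.hexdigest()

for root, dirs, files in os.walk("/path/to/scan"):  # placeholder root
    for name in files:
        full = os.path.join(root, name)
        try:
            if md5_of(full) in iocs:
                print("IOC match: %s" % full)
        except (IOError, OSError):
            pass  # unreadable file, skip it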

 

Per Symantec, here is the full list of file types that are targeted and encrypted by WannaCry:

  • .123
  • .3dm
  • .3ds
  • .3g2
  • .3gp
  • .602
  • .7z
  • .ARC
  • .PAQ
  • .accdb
  • .aes
  • .ai
  • .asc
  • .asf
  • .asm
  • .asp
  • .avi
  • .backup
  • .bak
  • .bat
  • .bmp
  • .brd
  • .bz2
  • .cgm
  • .class
  • .cmd
  • .cpp
  • .crt
  • .cs
  • .csr
  • .csv
  • .db
  • .dbf
  • .dch
  • .der
  • .dif
  • .dip
  • .djvu
  • .doc
  • .docb
  • .docm
  • .docx
  • .dot
  • .dotm
  • .dotx
  • .dwg
  • .edb
  • .eml
  • .fla
  • .flv
  • .frm
  • .gif
  • .gpg
  • .gz
  • .hwp
  • .ibd
  • .iso
  • .jar
  • .java
  • .jpeg
  • .jpg
  • .js
  • .jsp
  • .key
  • .lay
  • .lay6
  • .ldf
  • .m3u
  • .m4u
  • .max
  • .mdb
  • .mdf
  • .mid
  • .mkv
  • .mml
  • .mov
  • .mp3
  • .mp4
  • .mpeg
  • .mpg
  • .msg
  • .myd
  • .myi
  • .nef
  • .odb
  • .odg
  • .odp
  • .ods
  • .odt
  • .onetoc2
  • .ost
  • .otg
  • .otp
  • .ots
  • .ott
  • .p12
  • .pas
  • .pdf
  • .pem
  • .pfx
  • .php
  • .pl
  • .png
  • .pot
  • .potm
  • .potx
  • .ppam
  • .pps
  • .ppsm
  • .ppsx
  • .ppt
  • .pptm
  • .pptx
  • .ps1
  • .psd
  • .pst
  • .rar
  • .raw
  • .rb
  • .rtf
  • .sch
  • .sh
  • .sldm
  • .sldx
  • .slk
  • .sln
  • .snt
  • .sql
  • .sqlite3
  • .sqlitedb
  • .stc
  • .std
  • .sti
  • .stw
  • .suo
  • .svg
  • .swf
  • .sxc
  • .sxd
  • .sxi
  • .sxm
  • .sxw
  • .tar
  • .tbk
  • .tgz
  • .tif
  • .tiff
  • .txt
  • .uop
  • .uot
  • .vb
  • .vbs
  • .vcd
  • .vdi
  • .vmdk
  • .vmx
  • .vob
  • .vsd
  • .vsdx
  • .wav
  • .wb2
  • .wk1
  • .wks
  • .wma
  • .wmv
  • .xlc
  • .xlm
  • .xls
  • .xlsb
  • .xlsm
  • .xlsx
  • .xlt
  • .xltm
  • .xltx
  • .xlw
  • .zip

As you can see, the ransomware covers nearly every important file type a user might have on his or her computer. It also drops a text file on the user’s desktop with the following ransom note:

Screenshot: the WannaCry ransom note.


Predictive learning problems

This follows on from my previous post, How to Teach a Computer to Distinguish Cats from Dogs.

Predictive learning problems constitute the majority of tasks machine learning can be used to solve today. Applicable to a wide array of situations and data types, in this section we introduce the two major predictive learning problems: regression and classification.

Regression

Suppose we wanted to predict the share price of a company that is about to go public (that is, when a company first starts offering its shares of stock to the public). Following the pipeline discussed in Section 1.1.1, we first gather a training set of data consisting of a number of corporations (preferably active in the same domain) with known share prices. Next, we need to design feature(s) that are thought to be relevant to the task at hand. The company’s revenue is one such potential feature, as we can expect that the higher the revenue, the more expensive a share of stock should be. Now, in order to connect the share price to the revenue, we train a linear model or regression line using our training data.

Figure 1.7 – (top left panel) A toy training dataset of ten corporations with their associated share price and revenue values. (top right panel) A linear model is fit to the data. This trend line models the overall trajectory of the points and can be used for prediction in the future, as shown in the bottom left and bottom right panels.

The top panels of Fig. 1.7 show a toy dataset comprising share price versus revenue information for ten companies, as well as a linear model fit to this data. Once the model is trained, the share price of a new company can be predicted based on its revenue, as depicted in the bottom panels of this figure. Finally, comparing the predicted price to the actual price for a testing set of data, we can test the performance of our regression model and apply changes as needed (e.g., choosing a different feature). This sort of task, fitting a model to a set of training data so that predictions about a continuous-valued variable (e.g., share price) can be made, is referred to as regression. We now discuss some further examples of regression.
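To make the regression idea concrete in code, here is a tiny sketch using NumPy. The revenue and share price numbers are toy values I made up, not the data behind Fig. 1.7:

import numpy as np

# toy training data: revenue (in $M) and share price (in $) for ten companies
revenue = np.array([10, 20, 35, 50, 65, 80, 95, 110, 130, 150], dtype=float)
price = np.array([8, 12, 18, 25, 30, 38, 44, 52, 60, 70], dtype=float)

# fit a degree-1 polynomial (a line): price ~ slope * revenue + intercept
slope, intercept = np.polyfit(revenue, price, 1)

# predict the share price of a new company from its revenue
new_revenue = 120.0
print("predicted share price: %.2f" % (slope * new_revenue + intercept))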

 

Example 1.1 The rise of student loan debt in the United States

Figure 1.8 shows the total student loan debt, that is, money borrowed by students to pay for college tuition, room and board, etc., held by citizens of the United States from 2006 to 2014, measured quarterly. Over the eight-year period reflected in this plot, total student debt has tripled, totaling over one trillion dollars by the end of 2014. The regression line (in magenta) fit to this dataset represents the data quite well and, with its sharp positive slope, emphasizes the point that student debt is rising dangerously fast. Moreover, if this trend continues, we can use the regression line to predict that total student debt will reach two trillion dollars by the year 2026.

Figure 1.8 – Total student loan debt in the United States measured quarterly from 2006 to 2014. The rapid increase of the debt, measured by the slope of the trend line fit to the data, confirms the concerning claim that student debt is growing (dangerously) fast. The debt data shown in this figure was taken from [46].

Example 1.2 Associating genes with quantitative traits

Genome-wide association (GWA) studies (Fig. 1.9) aim at understanding the connections between tens of thousands of genetic markers, taken from across the human genome of numerous subjects, with diseases like high blood pressure/cholesterol, heart disease, diabetes, various forms of cancer, and many others [26, 76, 80]. These studies are undertaken with the hope of one day producing gene-targeted therapies, like those used to treat diseases caused by a single gene (e.g., cystic fibrosis), that can help individuals with these multifactorial diseases. Regression as a commonly employed tool in GWA studies is used to understand complex relationships between genetic markers (features) and quantitative traits like the level of cholesterol or glucose (a continuous output variable).

Figure 1.9 – Conceptual illustration of a GWA study employing regression, wherein a quantitative trait is to be associated with specific genomic locations.

Classification

The machine learning task of classification is similar in principle to that of regression. The key difference between the two is that instead of predicting a continuous-valued output (e.g., share price, blood pressure, etc.), with classification what we aim to predict takes on discrete values or classes. Classification problems arise in a host of forms. For example, object recognition, where different objects from a set of images are distinguished from one another (e.g., handwritten digits for the automatic sorting of mail, or street signs for semi-autonomous and self-driving cars), is a very popular classification problem. The toy problem of distinguishing cats from dogs discussed in How to Teach a Computer to Distinguish Cats from Dogs was such a problem. Other common classification problems include speech recognition (recognizing different spoken words for voice recognition systems), determining the general sentiment of a social network like Twitter towards a particular product or service, and determining what kind of hand gesture someone is making from a finite set of possibilities (for use in, e.g., controlling a computer without a mouse). Geometrically speaking, a common way of viewing the task of classification is as finding a separating line (or hyperplane in higher dimensions) that separates the two classes of data from a training set as well as possible. This is precisely the perspective on classification we took in describing the toy example in Section 1.1, where we used a line to separate (features extracted from) images of cats and dogs. New data from a testing set is then automatically classified by simply determining which side of the line/hyperplane the data lies on. Figure 1.10 illustrates the concept of a linear model or classifier used for performing classification on a 2-dimensional toy dataset.

Figure 1.10 – (top left panel) A toy 2-dimensional training set consisting of two distinct classes, red and blue. (top right panel) A linear model is trained to separate the two classes. (bottom left panel) A test point whose class is unknown. (bottom right panel) The test point is classified as blue since it lies on the blue side of the trained linear classifier.
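In code, this picture looks something like the sketch below. The toy points are my own, in the spirit of Fig. 1.10, and scikit-learn’s Perceptron is used purely as a convenient linear classifier:

import numpy as np
from sklearn.linear_model import Perceptron

# toy 2-D training set: class 0 ("red") and class 1 ("blue")
X_train = np.array([[1.0, 1.0], [1.5, 2.0], [2.0, 1.5],   # red cluster
                    [4.0, 4.5], [4.5, 4.0], [5.0, 5.0]])  # blue cluster
y_train = np.array([0, 0, 0, 1, 1, 1])

# training finds a line that separates the two classes
clf = Perceptron().fit(X_train, y_train)

# a test point of unknown class is labeled by the side of the line it falls on
test_point = np.array([[4.2, 4.3]])
label = "blue" if clf.predict(test_point)[0] == 1 else "red"
print("test point classified as: %s" % label)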

 

Example 1.3 Object detection

Object detection, a common classification problem, is the task of automatically identifying a specific object in a set of images or videos. Popular object detection applications include the detection of faces in images for organizational purposes and camera focusing, pedestrians for autonomous driving vehicles, and faulty components for automated quality control in electronics production. The same kind of machine learning framework, which we highlight here for the case of face detection, can be utilized for solving many such detection problems.
After training a linear classifier on a set of training data consisting of facial and non-facial images, faces are sought in a new test image by sliding a (typically) square window over the entire image. At each location of the sliding window, the image content inside is tested to see which side of the classifier it lies on (as illustrated in Fig. 1.11). If the (feature representation of the) content lies on the “face side” of the classifier, the content is classified as a face.

Figure 1.11 – To determine whether any faces are present in a test image (in this instance an image of the Wright brothers, inventors of the airplane, sitting together in one of their first motorized flying machines in 1908), a small window is scanned across its entirety. The content inside the box at each instance is determined to be a face by checking which side of the learned classifier the feature representation of the content lies on. In the figurative illustration shown here, the areas above and below the learned classifier (shown in black on the right) are the “face” and “non-face” sides of the classifier, respectively.
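The sliding-window loop itself is only a few lines of code. In the sketch below, looks_like_face is a stand-in for a trained classifier rather than a real detector; the point is simply how the window walks across the image:

import numpy as np

def looks_like_face(window):
    # placeholder for a trained classifier: extract features from the window
    # and check which side of the learned boundary they fall on
    return False

def slide(image, win=32, step=8):
    rows, cols = image.shape
    hits = []
    for r in range(0, rows - win + 1, step):
        for c in range(0, cols - win + 1, step):
            if looks_like_face(image[r:r + win, c:c + win]):
                hits.append((r, c))  # record window locations classified as "face"
    return hits

image = np.zeros((128, 128))  # stand-in for a grayscale test image
print(slide(image))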

 

Next up will be feature design.

Tips to a faster computer

The Geekiest One

Does your computer feel slower than it used to be? Does it take longer to start up or for programs to load? If so, chances are your computer has accumulated some “digital dust” and needs a little spring cleaning.
To better understand what causes your computer to slow down over time (and what you can do about it), here are ten sources of “digital dust.” The tips are based on a blog post Agent Wiebusch did a couple years ago on reasons your computer may be running slow. I have updated the advice a bit.
1) Too many programs running at the same time. Over the lifespan of a computer it is common for users to download programs, applications and other data that end up “running in the background.” Many of these programs start automatically, and you may not be aware they are open. The more things that run in…


How to Teach a Computer to Distinguish Cats from Dogs

To teach a child the difference between “cat” and “dog”, parents (almost!) never give their children some kind of formal scientific definition to distinguish the two; i.e., that a dog is a member of the species Canis familiaris from the broader class Mammalia, and that a cat, while being from the same class, belongs to another species known as Felis catus. No, instead the child is naturally presented with many images of what they are told are either “dogs” or “cats” until they fully grasp the two concepts. How do we know when a child can successfully distinguish between cats and dogs? Intuitively, when they encounter new (images of) cats and dogs and can correctly identify each new example. Like human beings, computers can be taught to perform this sort of task in a similar manner. This kind of task, where we aim to teach a computer to distinguish between different types of things, is referred to as a classification problem in machine learning.

1. Collecting Data

Like human beings, a computer must be trained to recognize the difference between these two types of an animal by learning from a batch of examples typically referred to as a training set of data.

Figure 1.1 – A training set of six cats (left panel) and six dogs (right panel). This set is used to train a machine learning model that can distinguish between future images of cats and dogs. The images in this figure were taken from [31].

Figure 1.1 shows such a training set consisting of a few images of different cats and dogs. Intuitively, the larger and more diverse the training set, the better a computer (or human) can perform a learning task, since exposure to a wider breadth of examples gives the learner more experience.

2. Designing features

Think for a moment about how you yourself tell images containing cats apart from those containing dogs. What do you look for in order to tell the two apart? You likely use color, size, the shape of the ears or nose, and/or some combination of these features in order to distinguish between the two. In other words, you do not just look at an image as simply a collection of many small square pixels. You pick out details, or features, from images like these in order to identify what it is you are looking at. This is true for computers as well. In order to successfully train a computer to perform this task (and any machine learning task more generally) we need to provide it with properly designed features or, ideally, have it find such features itself. This is typically not a trivial task, as designing quality features can be very application dependent. For instance, a feature like “number of legs” would be unhelpful in discriminating between cats and dogs (since they both have four!), but quite helpful in telling cats and snakes apart. Moreover, extracting the features from a training dataset can also be challenging. For example, if some of our training images were blurry or taken from a perspective where we could not see the animal’s head, the features we designed might not be properly extracted.
However, for the sake of simplicity with our toy problem here, suppose we can easily extract the following two features from each image in the training set:

1. size of nose, relative to the size of the head (ranging from small to big);
2. shape of ears (ranging from round to pointy).

Examining the training images shown in Figure 1.1, we can see that cats all have small noses and pointy ears, while dogs all have big noses and round ears. Notice that with the current choice of features each image can now be represented by just two numbers: one expressing the relative nose size, and another capturing the pointiness or roundness of the ears. Therefore we now represent each image in our training set in a 2-dimensional feature space where the features “nose size” and “ear shape” are the horizontal and vertical coordinate axes respectively, as illustrated in Figure 1.2. Because our designed features distinguish cats from dogs in our training set so well, the feature representations of the cat images are all clumped together in one part of the space, while those of the dog images are clumped together in a different part.

Figure 1.2 – Feature space representation of the training set, where the horizontal and vertical axes represent the features “nose size” and “ear shape” respectively. The fact that the cats and dogs from our training set lie in distinct regions of the feature space reflects a good choice of features.

3. Training a Model

Now that we have a good feature representation of our training data, the final act of teaching a computer how to distinguish between cats and dogs is a simple geometric problem: have the computer find a line or linear model that clearly separates the cats from the dogs in our carefully designed feature space. Since a line (in a 2-dimensional space) has two parameters, a slope and an intercept, this means finding the right values for both. Because the parameters of this line must be determined based on the (feature representation of the) training data, the process of determining proper parameters, which relies on a set of tools known as numerical optimization, is referred to as the training of a model.

Figure 1.3 shows a trained linear model (in black) which divides the feature space into cat and dog regions. Once this line has been determined, any future image whose feature representation lies above it (in the blue region) will be considered a cat by the computer, and likewise, any representation that falls below the line (in the red region) will be considered a dog.

Figure 1.3 – A trained linear model (shown in black) perfectly separates the two classes of animal present in the training set. Any new image received in the future will be classified as a cat if its feature representation lies above this line (in the blue region), and a dog if the feature representation lies below this line (in the red region).

Figure 1.4 – A testing set of cat and dog images, also taken from [31]. Note that one of the dogs, the Boston terrier on the top right, has both a short nose and pointy ears. Due to our chosen feature representation, the computer will think this is a cat!

4. Testing the Model

To test the efficacy of our learner, we now show the computer a batch of previously unseen images of cats and dogs (referred to generally as a testing set of data) and see how well it can identify the animal in each image. Figure 1.4 shows a sample testing set for the problem at hand, consisting of three new cat and dog images. To test our model we take each new image, extract our designed features (nose size and ear shape), and simply check which side of our line the feature representation falls on. In this instance, as can be seen in Figure 1.5, all of the new cats and all but one dog from the testing set have been identified correctly.

Figure 1.5 – Identification of (the feature representations of) our test images using our trained linear model. Notice that the Boston terrier is misclassified as a cat since it has pointy ears and a short nose, just like the cats in our training set.

The misidentification of the single dog (a Boston terrier) is due entirely to our choice of features, which we designed based on the training set in Fig. 1.1. This dog has been misidentified simply because its features, a small nose and pointy ears, match those of the cats from our training set. So while it first appeared that a combination of nose size and ear shape could indeed distinguish cats from dogs, we now see that our training set was too small and not diverse enough for this choice of features to be completely effective.

To improve our learner we must begin again. First, we should collect more data, forming a larger and more diverse training set. Then we will need to consider designing more discriminating features (perhaps eye color, tail shape, etc.) that further help distinguish cats from dogs. Finally, we must train a new model using the designed features and test it in the same manner to see if the new trained model is an improvement over the old one.

Review

Let us now briefly review the previously described process, by which a trained model was created for the toy task of differentiating cats from dogs. The same process is used to perform essentially all machine learning tasks, and therefore it is worthwhile to pause for a moment and review the steps taken in solving typical machine learning problems. We enumerate these steps below to highlight their importance, refer to them all together as the general pipeline for solving machine learning problems, and provide a picture that compactly summarizes the entire pipeline in Figure 1.6.

Figure 1.6 – The learning pipeline of the cat versus dog classification problem. The same general pipeline is used for essentially all machine learning problems.

0. Define the problem. What is the task we want to teach a computer to do?
1. Collect data. Gather data for training and testing sets. The larger and more diverse the data, the better.
2. Design features. What kind of features best describe the data?
3. Train the model. Tune the parameters of an appropriate model on the training data using numerical optimization.
4. Test the model. Evaluate the performance of the trained model on the testing data. If the results of this evaluation are poor, re-think the particular features used and gather more data if possible. (A toy end-to-end sketch of the whole pipeline follows.)
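Here is a toy end-to-end version of this pipeline in Python. The nose-size and ear-shape numbers are invented for illustration, and scikit-learn’s Perceptron stands in for the numerical optimization step:

import numpy as np
from sklearn.linear_model import Perceptron

# 1. collect data: toy (nose size, ear pointiness) features, both in [0, 1]
X_train = np.array([[0.10, 0.90], [0.20, 0.80], [0.15, 0.85],   # cats: small nose, pointy ears
                    [0.90, 0.20], [0.80, 0.10], [0.85, 0.15]])  # dogs: big nose, round ears
y_train = np.array(["cat", "cat", "cat", "dog", "dog", "dog"])

# 2. design features: already done, they are the two columns of X_train

# 3. train the model: find the parameters of a separating line
model = Perceptron().fit(X_train, y_train)

# 4. test the model on previously unseen animals
X_test = np.array([[0.20, 0.90],    # a new cat
                   [0.90, 0.25],    # a new dog
                   [0.20, 0.85]])   # a Boston terrier: small nose, pointy ears
print(model.predict(X_test))  # the terrier comes out as "cat"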

Starting my Udacity Journey


Today I have decided to start my distance learning journey with Udacity. I have worked my way through numerous MOOC sites, such as FutureLearn, Open, edX and Coursera, but had not looked at Udacity until now.

Each program Udacity offers is designed to help you achieve goals, meet objectives, and succeed in your life and career. Whether you have a specific job in mind or want to learn specific skills, the best way to decide is to envision your desired outcome, and then select the path that will get you there. Sometimes this is easy: you want to build Android apps, you take the Android Developer program! But if you’re not sure, they can guide you in the right direction. Their blog is a great resource for career pathing, and you can always email them at support@udacity.com, where they will learn about your interests, your goals, and your experience, and make personalized recommendations on the best ways to move forward.

For starters, I will be doing the various free courses:

 

After that, I plan to take the Intro to Programming Nanodegree, where I will get a refresher on web development and a solid foundation in Python and its syntax. Through this course I will also build a coding portfolio. The course currently costs $299 (normal price $399).

Once I complete this I will have a few options. I am leaning more towards Artificial Intelligence and Machine Learning Engineering.

Further courses I am looking at:

Follow me on my journey and progress.

 

Along with all this, I will be doing side courses at edX and Coursera, continue blogging about cyber security, and continue with my Open University degree.

 

 

How to prepare for PWK/OSCP, a noob-friendly guide

A few months ago I didn’t know what Bash was and had only heard of SSH tunneling, with no practical knowledge. I also didn’t like the idea of paying for PWK lab time without using it, so I went through a number of resources till I felt ready to start the course.

Warning: Don’t expect to be spoon-fed if you’re doing OSCP; you’ll need to spend a lot of time researching, and neither the admins nor the other students will give you answers easily.

1. PWK Syllabus
1.1 *nix and Bash
1.2 Basic tools
1.3 Passive Recon
1.4 Active Recon
1.5 Buffer Overflow
1.6 Using public exploits
1.7 File Transfer
1.8 Privilege Escalation
1.9 Client-Side Attacks
1.10 Web Application Attacks
1.11 Password Attacks
1.12 Port Redirection/Tunneling
1.13 Metasploit Framework
1.14 Antivirus Bypassing
2. Wargames
2.1 Over The Wire: Bandit
2.2 Over The Wire: Natas
2.3 Root-me.org
3. Vulnerable VMs

1. PWK Syllabus:

Simply the most important reference in the list: it shows the course modules in a detailed way. The entire preparation I did was based on it. It can be found here.

1.1 *nix and Bash:

You don’t need to use Kali Linux right away, a good alternative is Ubuntu till you get comfortable with Linux.

1. Bash for Beginners: Best Bash reference, IMO.
2. Bandit on Over The Wire: A great start for people who aren’t used to a terminal, or aren’t familiar with Bash or *nix in general. Each challenge gives you hints on which commands you can use; you need to research them.
3. Explainshell: Does NOT replace man pages, but breaks down commands easily for newcomers.

1.2 Basic tools:

You will use these tools a lot. Make sure you understand what they do and how you can utilize them.

Netcat: The most important tool in the entire course. Understand what it does, what options you have, and the difference between a reverse shell and a bind shell. Experiment a lot with it.
Ncat: Netcat’s mature brother, with SSL support. Part of Nmap.
Wireshark: A network analysis tool; play with it while browsing the internet, connecting to FTP, and reading/writing PCAP files.
TCPdump: Not all machines have that cute GUI; you could be stuck with a terminal.

1.3 Passive Recon:

Read about the following tools/techniques, experiment as much as possible.

1. Google dorks
2. Whois
3. Netcraft
4. Recon-ng: Make sure you check the Usage guide to know how it works.

1.4 Active Recon:

  • Understand what DNS is, how it works, how to perform forward and reverse lookups, what zone transfers are, and how to perform them. A great resource is here. (A tiny lookup sketch follows this list.)
  • Nmap: One of the most used tools during the course (if not the most). I’d recommend starting by reading the man pages and understanding the different scanning techniques and other capabilities it has (scripts, OS detection, service detection, …).
  • Services enumeration: SMTP, SNMP, SMB, and a lot of others. Don’t just enumerate them; understand what they’re used for and how they work.
  • A great list for enumeration and tools.
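For a quick feel of forward versus reverse lookups, the Python standard library is enough. A minimal sketch, using example.com as a stand-in target (zone transfers need proper tools like dig or dnsrecon):

import socket

# forward lookup: hostname -> IP address
ip = socket.gethostbyname("example.com")
print("forward: %s" % ip)

# reverse lookup: IP address -> hostname, if a PTR record exists
try:
    print("reverse: %s" % socket.gethostbyaddr(ip)[0])
except socket.herror:
    print("no PTR record for %s" % ip)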

1.5 Buffer Overflow:

The most fun part, in my opinion. There are countless resources on how to get started; I’d recommend Corelan’s series. You probably only need the first part for PWK.

1.6 Using public exploits:

Occasionally, you’ll need to use a public exploit, and maybe even modify the shellcode or other parts. Just go to Exploit-DB and pick one of the older, more reliable exploits (FTP ones, for example). The vulnerable version is usually listed with the exploit code.

1.7 File Transfer:

Not every machine has netcat installed, so you’ll need to find a way around that to upload exploits or other tools you need. A great post on this is here.

1.8 Privilege Escalation:

A never-ending topic; there are a lot of techniques, ranging from having an admin password to kernel exploits. A great way to practice this is with Vulnhub VMs. Check my OSCP-like VMs list here.

Windows: Elevating privileges by exploiting weak folder permissions
Windows: Privilege Escalation Fundamentals
Windows: Windows-Exploit-Suggester
Windows: Privilege Escalation Commands
Linux: Basic Linux Privilege Escalation
Linux: linuxprivchecker.py
Linux: LinEnum
Practical Windows Privilege Escalation
MySQL Root to System Root with UDF

1.9 Client Side Attacks:

Try out the techniques covered in Metasploit Unleashed, or an IE client-side exploit.

1.10 Web Application Attacks

Another lengthy subject. Understand what XSS is, SQL injection, LFI, RFI, directory traversal, and how to use a proxy like Burp Suite. Solve as much as you can of Natas on Over The Wire; it has great examples of code injection, session hijacking and other web vulnerabilities.

The key is to research till you feel comfortable.

1.11 Password Attacks:

Understand the basics of password attacks: the difference between online and offline attacks, how to use Hydra, JtR and Medusa, what rainbow tables are; the list goes on. An excellent post on this topic is here.

1.12 Port redirection/tunneling:

Not all machines are directly accessible; some are dual-homed, connected to an internal network. You’ll use these techniques a lot in non-public networks. This post does a great job of explaining it.

1.13 Metasploit Framework:

I decided to skip this part, but if you still want to study it, check out the Metasploit Unleashed course.

 

1.14 Antivirus Bypassing:

I skipped this part too.

2. Wargames

Use them as preparation for the vulnerable machines.

2.1 Over The Wire: Bandit

Great start for people who aren’t familiar with Linux or Bash.

2.2 Over The Wire: Natas

Focused on web applications; many of the challenges aren’t required for OSCP, but they help for sure.

2.3 Root-me.org

It has great challenges on privilege escalation, SQL injection, JavaScript obfuscation, password cracking and analyzing PCAP files.

3. Vulnerable Machines

Boot-to-root VMs are excellent for pentesting practice: you import a VM, run it, and start enumerating from your attacking machine. Most of them end in getting root access. Check the post above on which machines are closest to OSCP; there is also https://lab.pentestit.ru/.

Blog posts covering my journey through Pentestit.ru:

Pentestit Lab v10 – Introduction & Setup
Pentestit Lab v10 – The Mail Token
Pentestit Lab v10 – The Site Token

Automate Specific Routine Tasks – LazyScripts

This is a set of bash shell functions to simplify and automate specific routine tasks, as well as some more specialized ones.

Compatibility

  • RedHat Enterprise Linux 5+
  • CentOS 5+
  • Ubuntu 10.04+

Installation

Run this bash function as root:

function lsgethelper() {
        local LZDIR=/root/.lazyscripts/tools;
        if command -v git > /dev/null 2>&1; then
            if [[ -d ${LZDIR} ]]; then
                    cd "${LZDIR}" \
                    && git reset --hard HEAD \
                    && git clean -f \
                    && git pull git://github.com/hhoover/lazyscripts.git master;
            else
                    cd \
                    && git clone git://github.com/hhoover/lazyscripts.git "${LZDIR}";
            fi
            cd;
            source "${LZDIR}"/ls-init.sh;
        else
            # no git available: fall back to downloading a tarball
            mkdir -p "${LZDIR}";
            rm -rf "${LZDIR}"/lazyscripts-master;
            cd "${LZDIR}" \
            && curl -L https://github.com/hhoover/lazyscripts/archive/master.tar.gz | tar xvz;
            cd;
            source "${LZDIR}"/lazyscripts-master/ls-init.sh;
        fi
}
lsgethelper && lslogin

Throw this in your .bashrc for extra credit.

Functions

Function – Description
lsinfo – Display useful system information
lsbwprompt – Switch to a plain prompt.
lscolorprompt – Switch to a fancy colorized prompt.
lsbigfiles – List the top 50 files based on disk usage.
lsmytuner – MySQL Tuner.
lshighio – Reports stats on processes in an uninterruptable sleep state.
lsmylogin – Auto login to MySQL
lsmyengines – List MySQL tables and their storage engine.
lsmyusers – List MySQL users and grants.
lsmycreate – Creates a MySQL DB and MySQL user
lsmycopy – Copies an existing database to a new database.
lsap – Show a virtual host listing.
lsapcheck – Verify apache max client settings and memory usage.
lsapdocs – Prints out Apache’s DocumentRoots
lsapproc – Shows the memory used by each Apache process
lsrblcheck – Server Email Blacklist Check
lscloudkick – Installs the Cloudkick agent
lsvhost – Add an Apache virtual host
lsvsftpd – Installs/configures vsftpd
lspostfix – Set up Postfix for relaying email
lsparsar – Pretty sar output
lslsync – Install lsyncd and configure this server as a master
lswordpress – Install WordPress on this server
lsdrupal – Install Drupal 7 on this server
lswebmin – Install Webmin on this server
lsvarnish – Installs Varnish 3.0
lsconcurchk – Show concurrent connections
lscrtchk – Check SSL Cert/Key to make sure they match
lsrpaf – Install mod_rpaf to set correct client IP behind a proxy.
lspma – Installs phpMyAdmin
lsnginx – Replace Apache with nginx/php-fpm
lshaproxy – Install HAProxy on this server
lshppool – Adds a new pool to an existing HAProxy config
lswhatis – Output the script that would be run with a specific command.
lsnodejs – Installs Node.js and Node Package Manager
lsapitools – Installs API tools for Rackspace Cloud
lsrecap – Installs Recap (https://github.com/rackerlabs/recap)
lsnova – Nova Configuration generator

Enjoy!

 

credit: https://github.com/RedRum69/lazyscripts 

Shadow Brokers Release Attack Tool Archives

On April 9 and April 14, 2017, the Shadow Brokers threat group released archives of attack tools and other information that it claims originated from the National Security Agency (NSA). The contents included exploits against Windows, Solaris, and other software dating from as early as 2008, as well as information about a campaign targeting EastNets, a SWIFT (Society for Worldwide Interbank Financial Telecommunication) Service Bureau.

Despite some reports that the archives contain exploits for unpatched Windows vulnerabilities, SecureWorks Counter Threat Unit (CTU) researchers determined that there are no functional exploits against fully patched, supported Microsoft software. Several of the vulnerabilities were addressed in Microsoft Security Bulletin MS17-010, which was released as part of March’s patch cycle. Three other exploits target Windows XP, Vista, Server 2003, Server 2008, and IIS 6.x, but Microsoft does not plan to provide patches for these because the products are no longer supported.

Two attack tools target unpatched vulnerabilities in current Solaris versions:

– EBBISLAND is a remote buffer overflow exploit against XDR code that targets any running RPC service.
– EXTREMEPARR is a local privilege escalation exploit.

Another buffer overflow exploit named VIOLENTSPIRIT targets the ttsession daemon in Solaris 2.6-2.9. The archives also included exploits targeting less-common software such as Lotus Domino versions 6 and 7, Lotus cc:mail, RedFlag Webmail 4, Avaya Media Server, and phpBB.

According to the Shadow Brokers’ April 14 release, the PLATINUM COLONY threat group (also known as Equation Group) gained access to the EastNets network, monitored SWIFT transactions from a select number of targeted financial services institutions between March 2013 and at least October 2013, and had persistent and wide-ranging access to the EastNets network. CTU researchers assess with high confidence that PLATINUM COLONY is operated by a United States intelligence agency. The group has been active since at least 2001 and likely uses its sophisticated toolset for military espionage and national security objectives, rather than for economic espionage.

PowerPoint and Excel documents within the leaked files list SWIFT Alliance Access servers run by EastNets, and several of the servers are marked as compromised for data collection. There is no indication that networks and hosts operated by EastNets customers outside the EastNets environment were compromised, but SWIFT transactions in 2013 could have been monitored by an unauthorized party as they traversed EastNets servers. EastNets released a public statement saying, “The reports of an alleged hacker-compromised EastNets Service Bureau (ENSB) network is totally false and unfounded.” However, the documentation provided in the Shadow Brokers leak strongly suggests a compromise.

 

Recommended actions:

CTU researchers recommend that clients ensure that the MS17-010 security updates have been applied. In addition, clients should upgrade unsupported Windows operating systems and IIS web servers to a supported version, and should restrict external access to RPC services on Solaris servers.

SecureWorks actions:

The CTU research team is investigating the feasibility of countermeasures to detect the published exploits.

Questions:

If you have any questions or concerns, please submit a ticket via the SecureWorks Client Portal.

 

References:

https://portal.secureworks.com/intel/tips/5233

https://portal.secureworks.com/intel/tips/5835

https://blogs.technet.microsoft.com/msrc/2017/04/14/protecting-customers-and-evaluating-risk

https://technet.microsoft.com/en-us/library/security/ms17-010.aspx

http://www.eastnets.com/news_events/news/17-04-14/No_credibility_to_the_online_claim_of_a_compromise_of_EastNets_customer_information_on_its_SWIFT_service_bureau.aspx

How to Sniff a Network: Decoding the IP Layer

 

Following on from the previous posts, which can be found at How to Sniff a Network: Packet Sniffing on Windows and Linux and How to Sniff a Network: Raw Sockets and Sniffing.

Sniffing one packet is not overly useful, so let’s add some functionality to process more packets and decode their contents.

In its current form, our sniffer receives all of the IP headers along with any higher-level protocols such as TCP, UDP, or ICMP. The information is packed into binary form and, as we saw in the previous post, is quite difficult to understand. We are now going to work on decoding the IP portion of a packet so that we can pull out useful information such as the protocol type (TCP, UDP, ICMP) and the source and destination IP addresses. This will be the foundation for further protocol parsing later on. Examining what an actual packet looks like on the network will give you an understanding of how we need to decode incoming packets. Refer to Figure 3-1 for the makeup of an IP header.

Figure 3-1: Typical IPv4 header structure

We will decode the entire IP header (except the Options field) and extract the protocol type, source, and destination IP address. Using the Python ctypes module to create a C-like structure will allow us to have a friendly format for handling the IP header and its member fields. First, let’s take a look at the C definition of what an IP header looks like.

struct ip {
    u_char  ip_hl:4;
    u_char  ip_v:4;
    u_char  ip_tos;
    u_short ip_len;
    u_short ip_id;
    u_short ip_off;
    u_char  ip_ttl;
    u_char  ip_p;
    u_short ip_sum;
    u_long  ip_src;
    u_long  ip_dst;
};

You now have an idea of how the C data types map to the IP header values. Using C code as a reference when translating to Python objects is useful because it makes the conversion to pure Python seamless. Of note, the ip_hl and ip_v fields have a bit notation added to them (the :4 part). This indicates that these are bit fields, and that they are 4 bits wide. We will use a pure Python solution to make sure these fields map correctly so we can avoid having to do any bit manipulation.
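To see the bit-field mapping in isolation, here is a tiny sketch of my own (not from the listing below). The first byte of an IPv4 header, 0x45, unpacks to version 4 and header length 5, because on little-endian machines ctypes allocates bit fields starting from the least significant bits:

from ctypes import Structure, c_ubyte

class VerIHL(Structure):
    _fields_ = [("ihl", c_ubyte, 4),      # allocated first: the low nibble
                ("version", c_ubyte, 4)]  # then the high nibble

first_byte = VerIHL.from_buffer_copy(b"\x45")
print("version=%d ihl=%d" % (first_byte.version, first_byte.ihl))  # version=4 ihl=5

Now let’s implement our IP decoding routine into sniffer_ip_header_decode.py as shown below.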

import socket
import os
import struct
from ctypes import *

# host to listen on
host = "192.168.0.187"

# our IP header
class IP(Structure):
    _fields_ = [
        ("ihl",          c_ubyte, 4),
        ("version",      c_ubyte, 4),
        ("tos",          c_ubyte),
        ("len",          c_ushort),
        ("id",           c_ushort),
        ("offset",       c_ushort),
        ("ttl",          c_ubyte),
        ("protocol_num", c_ubyte),
        ("sum",          c_ushort),
        ("src",          c_ulong),
        ("dst",          c_ulong)
    ]

    def __new__(self, socket_buffer=None):
        return self.from_buffer_copy(socket_buffer)

    def __init__(self, socket_buffer=None):
        # map protocol constants to their names
        self.protocol_map = {1: "ICMP", 6: "TCP", 17: "UDP"}

        # human readable IP addresses
        self.src_address = socket.inet_ntoa(struct.pack("<L", self.src))
        self.dst_address = socket.inet_ntoa(struct.pack("<L", self.dst))

        # human readable protocol
        try:
            self.protocol = self.protocol_map[self.protocol_num]
        except:
            self.protocol = str(self.protocol_num)

# this should look familiar from the previous example
if os.name == "nt":
    socket_protocol = socket.IPPROTO_IP
else:
    socket_protocol = socket.IPPROTO_ICMP

sniffer = socket.socket(socket.AF_INET, socket.SOCK_RAW, socket_protocol)

sniffer.bind((host, 0))
sniffer.setsockopt(socket.IPPROTO_IP, socket.IP_HDRINCL, 1)

if os.name == "nt":
    sniffer.ioctl(socket.SIO_RCVALL, socket.RCVALL_ON)

try:
    while True:
        # read in a packet
        raw_buffer = sniffer.recvfrom(65565)[0]

        # create an IP header from the first 20 bytes of the buffer
        ip_header = IP(raw_buffer[0:20])

        # print out the protocol that was detected and the hosts
        print "Protocol: %s %s -> %s" % (ip_header.protocol, ip_header.src_address, ip_header.dst_address)

# handle CTRL-C
except KeyboardInterrupt:
    # if we're using Windows, turn off promiscuous mode
    if os.name == "nt":
        sniffer.ioctl(socket.SIO_RCVALL, socket.RCVALL_OFF)

The first step is defining a Python ctypes structure that will map the first 20 bytes of the received buffer into a friendly IP header. As you can see, all of the fields we identified in the preceding C structure match up nicely. The __new__ method of the IP class simply takes in a raw buffer (in this case, what we receive on the network) and forms the structure from it. When the __init__ method is called, __new__ is already finished processing the buffer. Inside __init__, we are simply doing some housekeeping to give human-readable output for the protocol in use and the IP addresses.

With our freshly minted IP structure, we now put in the logic to continually read in packets and parse their information. We read in a packet, pass its first 20 bytes to initialize our IP structure, and then simply print out the information that we have captured.
Let’s try it out.

Kicking the Tires

Let’s test out our previous code to see what kind of information we are extracting from the raw packets being sent. I definitely recommend that you do this test from your Windows machine, as you will be able to see TCP, UDP, and ICMP, which allows you to do some pretty neat testing (open up a browser, for example). If you are confined to Linux, then perform the previous ping test to see it in action.

Open a terminal and type:

python sniffer_ip_header_decode.py

Now, because Windows is pretty chatty, you’re likely to see output immediately. I tested this script by opening Internet Explorer and going to www.google.com, and here is the output from our script:

Protocol: UDP 192.168.0.190 -> 192.168.0.1
Protocol: UDP 192.168.0.1 -> 192.168.0.190
Protocol: UDP 192.168.0.190 -> 192.168.0.187
Protocol: TCP 192.168.0.187 -> 74.125.225.183
Protocol: TCP 192.168.0.187 -> 74.125.225.183
Protocol: TCP 74.125.225.183 -> 192.168.0.187
Protocol: TCP 192.168.0.187 -> 74.125.225.183

Because we aren’t doing any deep inspection on these packets, we can only guess what this stream is indicating. My guess is that the first couple of UDP packets are the DNS queries to determine where google.com lives, and the subsequent TCP sessions are my machine actually connecting and downloading content from their web server. To perform the same test on Linux, we can ping google.com, and the results will look something like this:

Protocol: ICMP 74.125.226.78 -> 192.168.0.190
Protocol: ICMP 74.125.226.78 -> 192.168.0.190
Protocol: ICMP 74.125.226.78 -> 192.168.0.190

You can already see the limitation: we are only seeing the response and only for the ICMP protocol. But because we are purposefully building a host discovery scanner, this is completely acceptable. We will now apply the same techniques we used to decode the IP header to decode the ICMP messages. Stay tuned for the next post.

How to Sniff a Network: Packet Sniffing on Windows and Linux

Network sniffers allow you to see packets entering and exiting a target machine. As a result, they have many practical uses before and after exploitation. In some cases, you’ll be able to use Wireshark (https://wireshark.org/) to monitor traffic, or use a Pythonic solution like Scapy (which we’ll explore in later posts). You can read more at How to Sniff a Network: Raw Sockets and Sniffing.

 

Accessing raw sockets in Windows is slightly different than on Linux, but we want the flexibility to deploy the same sniffer to multiple platforms. We will create our socket object and then determine which platform we are running on. Windows requires us to set some additional flags through a socket input/output control (IOCTL), which enables promiscuous mode on the network interface. In our first example, we simply set up our raw socket sniffer, read in a single packet, and then quit.

import socket
import os

# host to listen on
host = "192.168.0.196"

# create a raw socket and bind it to the public interface
if os.name == "nt":
    socket_protocol = socket.IPPROTO_IP
else:
    socket_protocol = socket.IPPROTO_ICMP

sniffer = socket.socket(socket.AF_INET, socket.SOCK_RAW, socket_protocol)
sniffer.bind((host, 0))

# we want the IP headers included in the capture
sniffer.setsockopt(socket.IPPROTO_IP, socket.IP_HDRINCL, 1)

# if we're using Windows, we need to send an IOCTL
# to set up promiscuous mode
if os.name == "nt":
    sniffer.ioctl(socket.SIO_RCVALL, socket.RCVALL_ON)

# read in a single packet
print sniffer.recvfrom(65565)

# if we're using Windows, turn off promiscuous mode
if os.name == "nt":
    sniffer.ioctl(socket.SIO_RCVALL, socket.RCVALL_OFF)

We start by constructing our socket object with the parameters necessary for sniffing packets on our network interface. The difference between Windows and Linux is that Windows will allow us to sniff all incoming packets regardless of protocol, whereas Linux forces us to specify that we are sniffing ICMP. Note that we are using promiscuous mode, which requires administrative privileges on Windows or root on Linux. Promiscuous mode allows us to sniff all packets that the network card sees, even those not destined for our specific host. Next we set a socket option that includes the IP headers in our captured packets. The next step is to determine whether we are running Windows and, if so, perform the additional step of sending an IOCTL to the network card driver to enable promiscuous mode. If you’re running Windows in a virtual machine, you will likely get a notification that the guest operating system is enabling promiscuous mode; you, of course, will allow it. Now we are ready to actually perform some sniffing, and in this case we simply print out the entire raw packet with no decoding. This is just a test to make sure we have the core of our sniffing code working. After a single packet is sniffed, we again test for Windows and disable promiscuous mode before exiting the script.

Kicking the Tires

Open up a fresh terminal or cmd.exe shell under Windows and run the following:

python sniffer.py

In another terminal or shell window, pick a host to ping. Here, we’ll ping nostarch.com:

ping nostarch.com

In the first window, where you executed your sniffer, you should see some garbled output that closely resembles the following:

('E\x00\x00:\x0f\x98\x00\x00\x80\x11\xa9\x0e\xc0\xa8\x00\xbb\xc0\xa8\x0
0\x01\x04\x01\x005\x00&\xd6d\n\xde\x01\x00\x00\x01\x00\x00\x00\x00\x00\
x00\x08nostarch\x03com\x00\x00\x01\x00\x01', ('192.168.0.187', 0))

You can see that we have captured the initial ICMP ping request destined for nostarch.com (based on the appearance of the string nostarch.com).
If you are running this example on Linux, then you would receive the response from nostarch.com. Sniffing one packet is not overly useful, so let’s add some functionality to process more packets and decode their contents in the following post.

 
