
Use Infoblox API to get all records from zones in a DNS view


Cowboy Denny

OK, my primary goal is to find any RFC 1918 address space in my public-facing DNS view. So how do I do that?

First, in Infoblox go to Data Management -> DNS -> Public DNS View

Click on Show Filter, then:

Change the first entry to Type

Set the middle entry to equals

Change the far-right entry to Authoritative (we don't care about delegated or forward zones)

It should look like the image below...

[Screenshot: zone list filtered to Type equals Authoritative, with an arrow pointing to the Export button]

Click on Export (the red arrow in the screenshot points to it) to download a CSV with all the zones from your Public DNS View.

 

Open up that CSV file and copy all the zones (domains).
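Alternatively, you can skip the GUI export and pull the zone list straight from the WAPI once you're on your Linux box. This is just a sketch — it assumes the same Grid Master, credentials, view, and WAPI version used in the script below, and that jq is installed. Note it will also return reverse zones (which show up in CIDR notation); remove those lines if you don't want them.

# Fetch the FQDN of every authoritative zone in the External view,
# one zone per line (this becomes the zonelist.txt file created below)
curl -s -k -u admin:Pa55w0rd \
  "https://gm.eventguyz.corp/wapi/v2.11.3/zone_auth?view=External&_return_fields=fqdn&_max_results=10000" \
  | jq -r '.[].fqdn' > zonelist.txt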

SSH into a Linux box.

Create a working directory; I called mine rfc1918_20220505.

Now in that directory, create a text file called zonelist.txt and paste in all the authoritative zones you copied from the CSV export above (don't worry about reverse zones).

NOTE: It's important that the file is named exactly zonelist.txt, since that filename is referenced in the script below.
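For example, zonelist.txt is just one zone (domain) per line; these entries are made up for illustration:

eventguyz.com
int.eventguyz.com
events.eventguyz.com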

Now create another file, which I call rfc1918extract.sh, and paste in the script below (edit the variables at the top to match your environment).

#!/bin/bash -

# This script reads a file, zonelist.txt, that contains a
# list of zone files to download.  A separate csv file will be
# created for each zone.  The csv file will be named the
# same as the zone name.  Minimal error checking is
# performed.  Use at your own risk.
# All files are located in the same directory as this script.

# Username and password with permission to download csv files
USERNAME="admin"
PASSWORD="Pa55w0rd"

# Grid Master
SERVER="gm.eventguyz.corp"

# Define file containing list of zones to export
ZONELIST="zonelist.txt"

# Define file that will contain results of curl command
OUTFILE="result.txt"

# Location of curl on this system.  Use -s so curl is silent
CURL="/usr/bin/curl -s"

# WAPI version
VERSION="v2.11.3"

# What view are these zones in? Maybe default
VIEW="External"
#VIEW="default"

############################################
# No more variables to set below this line #
############################################

# Process the zonelist file one line at a time
while read -r ZONE
do

   echo
   echo
   echo
   echo
   echo
   echo "Processing zone:    $ZONE"

   # Create CSV file for this zone
   $CURL \
      --tlsv1 \
      --insecure \
      --noproxy '*' \
      --user "$USERNAME:$PASSWORD" \
      -H "Content-Type: application/json" \
      -X POST "https://$SERVER/wapi/$VERSION/fileop?_function=csv_export" \
      -d "{\"_object\":\"allrecords\",\"view\":\"$VIEW\",\"zone\":\"$ZONE\"}" \
      > "$OUTFILE"

   ERROR_COUNT=$(grep -c Error "$OUTFILE")
   echo "Error Count= $ERROR_COUNT"
   if [ "$ERROR_COUNT" -ne 0 ]; then
      # Display the error and skip rest of loop
      grep Error "$OUTFILE"
      continue
   fi

   # Get the "token" and "download URL" for later use
   TOKEN=$(grep "token" "$OUTFILE" | cut -d"\"" -f4)

   URL=$(grep "url" "$OUTFILE" | cut -d"\"" -f4)

   echo "Token:              $TOKEN"
   echo "URL:                $URL"

   # Download the CSV file
   echo "Download CSV file section"
   $CURL \
      --tlsv1 \
      --insecure \
      --noproxy '*' \
      -u "$USERNAME:$PASSWORD" \
      -H "Content-Type: application/force-download" \
      -O "$URL"

   # Rename CSV file so the file name matches the zone name
   echo "rename CSV file section for $ZONE"
   FILENAME="${ZONE}.csv"
   echo "Filename with $ZONE:           $FILENAME"
   # Reverse zones will contain the / character which will be interpreted
   # as a directory delimiter if included in file name.  Replace with +
   FILENAME=$(echo "$FILENAME" | tr '/' '+')

   echo "Filename remove slashes:           $FILENAME"
   mv Zonechilds.csv "$FILENAME"
   echo "Filename after Zonechilds.csv mv:           $FILENAME"
   # Let NIOS know download is complete
   echo "Let NIOS know download is complete SECTION"
   $CURL \
      --tlsv1 \
      --insecure \
      --noproxy '*' \
      -u "$USERNAME:$PASSWORD" \
      -H "Content-Type: application/json" \
      -X POST https://$SERVER/wapi/$VERSION/fileop?_function=downloadcomplete \
      -d "{ \"token\": \"$TOKEN\"}"

done < "$ZONELIST"

exit

Make the file executable by running: chmod 755 rfc1918extract.sh

If you do an ls in the directory, you should have two files, like the image below.

[Screenshot: directory listing showing rfc1918extract.sh and zonelist.txt]

Make sure you are in the directory with your files:

cd rfc1918_20220505

Now it's time to run the script: bash rfc1918extract.sh

Verify it's working by watching domain after domain go by on the screen.

This is going to take a while.

Once completed, you will have a domain.com.csv file for each zone that was in your zonelist.txt.
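As a quick sanity check, you can compare how many zones you asked for against how many CSV files you got (just a sketch):

wc -l < zonelist.txt   # zones requested
ls *.csv | wc -l       # CSV files produced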

 

Now you have choices on what you want to do with all these CSV files.

OPTION #1

In the same directory with all those CSV files, if you want to find all the RFC 1918 addresses, you could run:

grep -E ',10\.|,172\.|192\.168\.' * | sort > /tmp/all-rfc1918-records.txt

This grep searches for multiple patterns. Since these are comma-separated files, we can search for ,10. and ,172. and 192.168. Note that ,172\. is a loose match: it also catches 172.x addresses outside the RFC 1918 range 172.16.0.0/12.
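If you want a stricter pattern that only matches true RFC 1918 space (10.0.0.0/8, 172.16.0.0/12, 192.168.0.0/16), something like this should work — adjust the leading comma to match your CSV layout:

grep -E ',(10\.|172\.(1[6-9]|2[0-9]|3[01])\.|192\.168\.)' * | sort > /tmp/all-rfc1918-records.txt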

 

OPTION #2

It's possible you want to combine all the CSV files into one master CSV, which can be done using merge_csv.py.

merge_csv (1).py

Download the file above (merge_csv (1).py), rename it to merge_csv.py, and upload it to your Linux box in the directory where all your CSV files are.

Now run the python script

python3 merge_csv.py

The script creates a combined CSV for each record type:

File Saved: /rfc1918_20220505/data_20220506/srvrecord_20220506.csv
File Saved: /rfc1918_20220505/data_20220506/txtrecord_20220506.csv
File Saved: /rfc1918_20220505/data_20220506/arecord_20220506.csv
File Saved: /rfc1918_20220505/data_20220506/cnamerecord_20220506.csv
File Saved: /rfc1918_20220505/data_20220506/hostaddress_20220506.csv
File Saved: /rfc1918_20220505/data_20220506/hostrecord_20220506.csv
File Saved: /rfc1918_20220505/data_20220506/mxrecord_20220506.csv
File Saved: /rfc1918_20220505/data_20220506/ptrrecord_20220506.csv
File Saved: /rfc1918_20220505/data_20220506/nsrecord_20220506.csv
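If you don't have the merge_csv.py attachment handy, a rough stand-in in plain awk is possible. This is only a sketch, not the same script, and it assumes the first column of each exported row is the record type (with header- lines marking headers), which is how Infoblox CSV exports are laid out:

# Bucket every data row into a per-record-type CSV based on column 1
# (drops the header- lines, so the per-type files have no header row)
mkdir -p merged
awk -F, '!/^header-/ { print > ("merged/" $1 ".csv") }' *.csv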

Now that you have these per-type CSV files, you can either combine them all into one CSV:

cat *csv > int.eventguyz.com_external.csv
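One caveat: if each per-type CSV starts with its own header row (as the merge script's output appears to), a blind cat scatters header lines through the combined file. Assuming exactly one header line per file, this variant skips them all (the combined file then has no header at all):

awk 'FNR > 1' *.csv > int.eventguyz.com_external.csv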

Or I go into the data_<date> directory and run the following command to pull the RFC 1918 addresses into one CSV:

grep -E ',10\.|,172\.|192\.168\.' * | sort > all-rfc1918-records_$(date +%Y%m%d).csv

EXAMPLE INPUT (if needed)

csv_examples.zip

EXAMPLE OUTPUT (if needed)

csv_output_examples.zip


For reference, I have received an error:

[855/1304] File Processed: /home/dhosang/ddiextract/zahyield.com.csv
Traceback (most recent call last):
  File "merge_csv.py", line 51, in <module>
    processFile(directory + os.sep + file)
  File "merge_csv.py", line 45, in processFile
    data[header.index(my_tuple[0])].append([file.split(os.sep)[-1]] + line)
IndexError: list index out of range

It appears a record is missing from one of the files.

NOTE: If it doesn't seem to be running, check a couple of things:

  1. Are you running it with sudo? You need permission to create result.txt; without it you'll get an error stating you don't have permission.
  2. Make sure Infoblox allows your IP address to access the API (a quick check is sketched below).
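A quick way to test both API reachability and your credentials is a simple WAPI read against the grid object; this is just a sketch using the same host and login as the script:

# Should return a small JSON blob if the API is reachable and the login works
curl -s -k -u admin:Pa55w0rd "https://gm.eventguyz.corp/wapi/v2.11.3/grid"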