SOW

(Signatures Observed Working)

Overview

A simple voting system integrated into a Snort signature test environment. It is designed to prevent bad signatures from being rolled out into a production environment, and to offer a low-effort way to determine the validity of signatures with minimal impact on testers.

Status

Researching

What you'll need

  • Snort running on your choice of OS (preferably Linux, *BSD, or Solaris)
  • A web front-end for evaluating Snort signature trips.
    • Take your pick as to which existing front-end you want to modify, or make your own (a sketch of feeding it alerts follows below).
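
The front-end needs a feed of signature trips to present for voting. The sketch below shows one way that feed could be scraped from a Snort fast-alert log in Python; the log path and the alert_fast line format are assumptions here, and they vary between Snort versions and configurations.

    #!/usr/bin/env python
    # Sketch only: scrape signature trips from a Snort fast-alert log so
    # the web front-end can present them for voting.  The path and line
    # format below are assumptions, not project requirements.
    import re

    ALERT_LOG = "/var/log/snort/alert"  # hypothetical location; match snort.conf

    # One alert_fast line looks roughly like:
    # 08/22-04:25:00.123456 [**] [1:1000001:2] MSG [**] ... {TCP} 1.2.3.4:80 -> 5.6.7.8:4444
    ALERT_RE = re.compile(
        r"^(?P<stamp>\d{2}/\d{2}-\d{2}:\d{2}:\d{2}\.\d+)\s+"
        r"\[\*\*\]\s+\[(?P<gid>\d+):(?P<sid>\d+):(?P<rev>\d+)\]\s+"
        r"(?P<msg>.*?)\s+\[\*\*\].*?"
        r"\{(?P<proto>\w+)\}\s+(?P<src>\S+)\s+->\s+(?P<dst>\S+)")

    def parse_alerts(path=ALERT_LOG):
        """Yield one dict per signature trip found in the log."""
        with open(path) as log:
            for line in log:
                match = ALERT_RE.match(line.strip())
                if match:
                    yield match.groupdict()

    if __name__ == "__main__":
        for trip in parse_alerts():
            print("sid %(sid)s: %(msg)s (%(src)s -> %(dst)s)" % trip)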

Features

  • For each signature that trips on the test system, the user is presented with a voting system similar to a simple poll on a blog (a sketch of one possible voting back-end follows this list).
    • User has three options when evaluating a signature trip:
      • Good: The signature tripped on what it was designed to catch, and the traffic is hostile in nature.
        • Good signatures are rolled out into production at regular intervals.
      • Needs work: Either the signature tripped on what it was supposed to catch but the traffic isn't necessarily hostile, or the traffic is hostile but the signature could use some tightening.
        • A brief note will need to be provided with reasoning and/or suggestions.
        • Signatures that need work are changed and put back in for more testing until they're Good.
      • Bad: The signature trips on normal/non-hostile traffic (false positives).
        • A brief note will need to be provided with reasoning.
        • Bad signatures are dropped permanently.
  • Will support multiple users.
    • Simple metrics will be kept on user interaction with the system.
      • Shows how often certain users participate in the signature testing.
      • Shows general differences in user analysis of similar signature trips.
        • Could reveal problems with how employees are working and suggest how to fix them.
  • Reports can be generated at any point to determine the performance of any individual signature, or of the signature set as a whole.
    • Reports can likewise cover any individual user, or all users as a whole.
    • Certain reports might need to be generated on a real-time basis to give a better overview of what's going on (the sketch after this list starts these off as simple aggregate queries).
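
To make the voting feature concrete, here is a minimal sketch of how votes could be stored and the note requirement enforced. The schema, table, and function names are illustrative assumptions, not a spec; SQLite is used only to keep the example self-contained.

    import sqlite3

    # Assumed schema: one row per evaluation of a signature trip.
    SCHEMA = """
    CREATE TABLE IF NOT EXISTS votes (
        id      INTEGER PRIMARY KEY,
        sid     INTEGER NOT NULL,   -- Snort signature ID that tripped
        user    TEXT    NOT NULL,   -- who evaluated the trip
        verdict TEXT    NOT NULL
                CHECK (verdict IN ('good', 'needs_work', 'bad')),
        note    TEXT                -- brief reasoning from the voter
    );
    """

    def record_vote(db, sid, user, verdict, note=None):
        """Store one evaluation of a signature trip.  'Needs work' and
        'Bad' verdicts must carry a brief note with reasoning, as the
        feature list above requires."""
        if verdict in ("needs_work", "bad") and not note:
            raise ValueError("a %r verdict requires a brief note" % verdict)
        db.execute("INSERT INTO votes (sid, user, verdict, note)"
                   " VALUES (?, ?, ?, ?)", (sid, user, verdict, note))
        db.commit()

    db = sqlite3.connect(":memory:")
    db.executescript(SCHEMA)
    record_vote(db, 1000001, "alice", "good")
    record_vote(db, 1000002, "bob", "needs_work",
                "Trips on ordinary HTTP GETs; content match needs tightening.")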
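The reporting and metrics features could then start life as simple aggregate queries over the same votes table. These run on demand, which may cover the "any point" requirement; a real-time overview could simply re-run them on a timer. Names and columns are again assumptions.

    def signature_report(db):
        """Verdict tally per signature; SQLite evaluates the boolean
        comparisons as 0/1, so SUM() counts matching votes."""
        return db.execute(
            """SELECT sid,
                      SUM(verdict = 'good')       AS good,
                      SUM(verdict = 'needs_work') AS needs_work,
                      SUM(verdict = 'bad')        AS bad
                 FROM votes GROUP BY sid""").fetchall()

    def user_report(db):
        """How often each user participates in signature testing."""
        return db.execute(
            "SELECT user, COUNT(*) FROM votes GROUP BY user").fetchall()

    def disagreement_report(db):
        """Signatures whose trips drew different verdicts from different
        users -- the 'general differences in user analysis' metric."""
        return db.execute(
            """SELECT sid, COUNT(DISTINCT verdict) AS verdicts
                 FROM votes GROUP BY sid
                HAVING COUNT(DISTINCT verdict) > 1""").fetchall()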