
PCI-DSS Requirement 6.2 Change Impacts after 30 June 2012

PCI-DSS (Payment Card Industry Data Security Standard) version 2.0 has been out since October 2010, but the vulnerability ranking defined in testing procedure 6.2.a was considered a best practice until June 30, 2012, after which it becomes a requirement. Now that the vulnerability ranking is mandatory, it will impact the way you classify your vulnerabilities, your change management process, your internal vulnerability assessments, your NTP monitoring, the way you document your hardening guides, and your software development process.

Requirement 6.2

Just as a recap, in case you don't know this requirement:

6.2 Establish a process to identify and assign a risk ranking to newly discovered security vulnerabilities.
Notes: Risk rankings should be based on industry best practices. For example, criteria for ranking "High" risk vulnerabilities may include a CVSS base score of 4.0 or above, and/or a vendor-supplied patch classified by the vendor as "critical", and/or a vulnerability affecting a critical system component.
Testing procedure 6.2.a: Interview responsible personnel to verify that processes are implemented to identify new security vulnerabilities, and that a risk ranking is assigned to such vulnerabilities. (At minimum, the most critical, highest risk vulnerabilities should be ranked as "High".)

As described above, PCI-DSS requirement 6.2 requires that a formal vulnerability management process exist in order to rank the risk of discovered vulnerabilities before assets are put into production. The process should also include outside sources of security vulnerability information (vendors and/or vulnerability scanning tools and/or external sources). The vulnerability risk rankings could be “Low“, “Medium” and “High“, but at minimum the most critical vulnerabilities should be ranked as “High“. As described in the “Notes” of requirement 6.2, you could use CVSS scores for your risk ranking mapping.
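
As a simple illustration of such a ranking process, here is a minimal sketch based on the criteria given in the note above; the helper name and the exact thresholds are assumptions for the example, not values mandated by the standard:

# Hypothetical risk-ranking helper following the example criteria of the 6.2 note:
# a CVSS base score of 4.0 or above, a vendor patch classified as "critical", or a
# critical system component is ranked "High"; the "Medium" cut-off is an assumption.
def rank_vulnerability(cvss_base, vendor_rating=None, critical_component=False):
    if cvss_base >= 4.0 or vendor_rating == "critical" or critical_component:
        return "High"
    if cvss_base >= 2.0:
        return "Medium"
    return "Low"

print(rank_vulnerability(9.3))                            # "High", e.g. CVE-2012-1875
print(rank_vulnerability(3.5, vendor_rating="critical"))  # "High"
print(rank_vulnerability(1.8))                            # "Low"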

Using CVSS v2 scoring

As described by Thierry Zoller in his “CVSS – Common Vulnerability Scoring System – a critique [ Part1 ]” blog post, CVSS v2 is not perfect and needs some enhancements, perhaps in v3. In order to explain the impact of CVSS scores on the risk ranking, we will use CVE-2012-1875 as an example.

The base CVSS score, on the National Vulnerability Database, for CVE-2012-1875, one of the vulnerabilities patched in MS12-037, is 9.3 (HIGH). But through your vulnerability management process you could increase or decrease the risk ranking.

In the CVSS v2 “Base Score Metrics“, the “Access Complexity” for CVE-2012-1875 is defined by default as “Medium“, but since an exploit is available through Metasploit we could consider that the “Access Complexity” should be set to “Low” (the attack can be performed manually and requires little skill or additional information gathering).

In the “Temporal Score Metrics“, I would set “Exploitability” to “High” because the Metasploit module works against Internet Explorer 8 on Windows XP SP3 and Windows 7 SP1. I would set “Remediation Level” to “Official Fix” and “Report Confidence” to “Confirmed“. In my opinion, “Remediation Level” is the most problematic metric in CVSS v2, because the existence of a patch does not imply that you have actually planned to deploy it, or have already deployed it.

In the “Environmental Score Metrics“, “Collateral Damage Potential” and “Target Distribution” depend on your organization and on the number of potentially vulnerable assets.

Example 1: If your vulnerable assets have a “Low” “Collateral Damage Potential” and they represent only “0 to 25%” of the total assets, the overall CVSS score will be 2.2.

Example 2: If your vulnerable assets have a “High” “Collateral Damage Potential” and they represent “26 to 75%” of the total assets, the overall CVSS score will be 7.

Example 3: If your vulnerable assets have a “Low” “Collateral Damage Potential” and you don’t know the number of potentially vulnerable assets, the overall CVSS score will be 9.4. This score demonstrates the importance of having a good inventory of your assets.
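
For reference, the last step of the CVSS v2 environmental formula combines the adjusted temporal score with “Collateral Damage Potential” (CDP) and “Target Distribution” (TD). The sketch below uses the standard CVSS v2 weights for CDP and TD and assumes an adjusted temporal score of 8.7 for CVE-2012-1875 (the base score raised to 10.0 by the lower access complexity, then multiplied by 0.87 for the official fix); with that assumption it reproduces examples 1 and 2 above:

# CVSS v2 environmental score, last step:
#   EnvScore = round_1_decimal((AdjTemporal + (10 - AdjTemporal) * CDP) * TD)
CDP = {"None": 0.0, "Low": 0.1, "Low-Medium": 0.3, "Medium-High": 0.4, "High": 0.5}
TD = {"None": 0.0, "Low (0-25%)": 0.25, "Medium (26-75%)": 0.75,
      "High (76-100%)": 1.0, "Not Defined": 1.0}

def environmental_score(adjusted_temporal, cdp, td):
    return round((adjusted_temporal + (10 - adjusted_temporal) * CDP[cdp]) * TD[td], 1)

adj_temporal = 8.7  # assumption explained above
print(environmental_score(adj_temporal, "Low", "Low (0-25%)"))       # 2.2 (example 1)
print(environmental_score(adj_temporal, "High", "Medium (26-75%)"))  # 7.0 (example 2)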

You can also play with the CVSS v2 “Security Requirements” metrics, which allow you to customize the CVSS score depending on the importance of the affected assets in terms of confidentiality, integrity, and availability.

For organizations that don’t have the resources (an up-to-date CMDB, an automated vulnerability scanner, etc.) to customize the CVSS score, and also because CVSS v2 is not perfect, I would recommend simply taking the base CVSS score as the default value for vulnerability ranking.

Requirement 2.2

Requirement 2.2 is also impacted by requirement 6.2. Just as a recap, in case you don't know this requirement:

2.2 Develop configuration standards for all system components. Assure that these standards address all known security vulnerabilities and are consistent with industry-accepted system hardening standards.
2.2.b Verify that system configuration standards are updated as new vulnerability issues are identified, as defined in Requirement 6.2

If a vulnerability is discovered or published and one of your system components is vulnerable despite an industry-accepted hardening standard being applied, you will have to update your configuration standard, as soon as possible or at the latest before going into production, in order to avoid reintroducing the vulnerability when a new system component is installed.

Requirement 6.5

Requirement 6.5 is also impacted by requirement 6.2. Just as a recap, in case you don't know this requirement:

6.5 Develop applications based on secure coding guidelines. Prevent common coding vulnerabilities in software development processes, to include the following:
6.5.6 All "High" vulnerabilities identified in the vulnerability identification process (as defined in PCI DSS Requirement 6.2).

As described in the requirement, a formal software development process should exist and include references to the vulnerability ranking, with associated resolution processes. If a “High” vulnerability is discovered or published and one of your applications is vulnerable, you will have to correct or mitigate the vulnerability, as soon as possible or at the latest before going into production.

Requirement 10.4

Requirement 10.4 is also impacted by requirement 6.2. Just as a recap, in case you don't know this requirement:

10.4 Using time-synchronization technology, synchronize all critical system clocks and times and ensure that the following is implemented for acquiring, distributing, and storing time.
10.4.a Verify that time-synchronization technology is implemented and kept current per PCI DSS Requirements 6.1 and 6.2.

Accurate time synchronization is an important part of the security process, yet most of the time this requirement is underestimated and the risks associated with time synchronization defects are under-evaluated. Each reported time synchronization vulnerability should be carefully evaluated and assigned a risk ranking. For example, if a server has an offset of more than x seconds, the event could be given a “High” vulnerability ranking. Other examples could be that the NTP client can no longer connect to the NTP server, or that a configuration change has been made on the NTP client or server, etc.
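
Purely as an illustration, a small monitoring check along those lines could look like the sketch below; it assumes the third-party ntplib Python package, and the 5-second threshold and server name are arbitrary placeholders for your own “x seconds” value:

# Hypothetical NTP drift check: ranks the local clock offset as a "High" or "Low"
# finding, in the spirit of requirement 6.2 risk ranking. Uses the third-party
# ntplib package; the 5-second threshold is an assumption, not a PCI-DSS value.
import ntplib

MAX_OFFSET_SECONDS = 5.0  # replace with your own "x seconds" threshold

def check_ntp_offset(server="pool.ntp.org"):
    try:
        response = ntplib.NTPClient().request(server, version=3, timeout=5)
    except Exception as exc:
        # Unable to reach the NTP server: also a "High" ranked event
        return "High", "NTP server {} unreachable: {}".format(server, exc)
    offset = abs(response.offset)
    ranking = "High" if offset > MAX_OFFSET_SECONDS else "Low"
    return ranking, "offset is {:.3f} seconds".format(offset)

print(check_ntp_offset())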

Before June 30, it wasn’t mandatory to resolve “High” vulnerabilities related to time synchronization. Now it is, and you should correct them as soon as possible and validate the correction.

Requirement 11.2.1

Requirement 11.2.1 is also impacted by requirement 6.2. Just as a recap, in case you don't know this requirement:

11.2.1 Perform quarterly internal vulnerability scans.
11.2.1.b Review the scan reports and verify that the scan process includes rescans until passing results are obtained, or all "High" vulnerabilities as defined in PCI DSS Requirement 6.2 are resolved.

As described in the requirement, a formal quarterly scanning process should exist and include references to the vulnerability ranking, together with the requirement of a rescan until the vulnerability is resolved. Before June 30, it wasn’t mandatory to resolve “High” vulnerabilities found by a quarterly internal vulnerability scan. Now it is, and you should correct them as soon as possible and validate the correction with a rescan.

Requirement 11.2.3

Requirement 11.2.3 is also impacted by requirement 6.2. Just as a recap, in case you don't know this requirement:

11.2.3 Perform internal and external scans after any significant change.
11.2.3.b Review scan reports and verify that the scan process includes rescans until: 
- For external scans, no vulnerabilities exist that are scored greater than a 4.0 by the CVSS, 
- For internal scans, a passing result is obtained or all "High" vulnerabilities as defined in PCI DSS Requirement 6.2 are resolved.

As described in the requirement, a formal scanning process after any significant change should exist and include references to the vulnerability ranking, together with the requirement of a rescan until the vulnerability is resolved. Significant changes are illustrated by examples in requirement 11.2: new system component installations, changes in network topology, firewall rule modifications, product upgrades.

Before June 30, it wasn’t mandatory to resolve “High” vulnerabilities introduced by a significant change and discovered by an internal vulnerability scan. Now it is, and you should correct them as soon as possible and validate the correction with a rescan.

Conclusion

As you can see, requirement 6.2 introduces a lot of new obligations, and if you are PCI-DSS compliant you should plan all the required actions as soon as possible in order to comply with them.

Why And Howto Calculate Your Events Log Size

If you are planning to start a Log or Event Management project, you will surely need to know your Normal Event log size (NE). This Normal Event log size, combined with your normal events per second (EPS) rate and your storage retention policy, will help you estimate your storage requirements.
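
As a rough, purely illustrative worked example (the event size, rate and retention below are assumptions, not measurements):

# Back-of-the-envelope raw storage estimate; illustrative values only.
avg_event_size_bytes = 300   # Normal Event log size (NE)
normal_eps = 500             # normal events per second
retention_days = 365         # e.g. the one-year retention of PCI-DSS Req. 10.7

raw_bytes = avg_event_size_bytes * normal_eps * 86400 * retention_days
print("%.1f TiB of raw logs per year" % (raw_bytes / 1024.0**4))  # ~4.3 TiB

Indexing overhead and any events generated by the product itself (discussed further below) come on top of that raw figure.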

Never forget that Log Management storage requirements are not the same as Event Management storage requirements. Most of the time, Log Management storage requirements are higher than those for Event Management. For example, for Log Management, PCI-DSS v2.0 Req. 10.7 requires 1 year of retention:

10.7 Retain audit trail history for at least one year, with a minimum of three months immediately available for analysis (for example, online, archived, or restorable from back-up).

But in order to address PCI-DSS v2.0 Req. 10.6, you will maybe do Event Management with a SIEM (such as ArcSight ESM, RSA enVision, QRadar SIEM, etc.).

10.6 Review logs for all system components at least daily. Log reviews must include those servers that perform security functions like intrusion-detection system (IDS) and authentication, authorization, and accounting protocol (AAA) servers (for example, RADIUS). Note: Log harvesting, parsing, and alerting tools may be used to meet compliance with Requirement 10.6

You don’t need a SIEM to do Log Management, and you also don’t need to store a full year of logs on your SIEM solution. Long-term retention, long-term reporting, and “raw” event forensics are mostly done on a Log Management infrastructure (like ArcSight Logger, QRadar Log Manager, Novell Sentinel Log Manager, etc.). Storage retention for your Event Management infrastructure will depend mostly on your correlation rules, your acknowledgement time for a correlated event, the number of security analysts present in your SOC, etc.

Don’t imagine that a magic formula exists to define your event log size; some tools can help you, but you need to analyze your own logs in order to determine your Normal Event log size. First of all, you have to define your Log and/or Event Management scope. This scope could initially be driven by regulations or compliance requirements, but don’t forget that regulation and compliance are not security. Also, each technology has different log sizes: an Apache HTTPD log will not have the same size as an SSHD log, and an Apache HTTPD log from server A will surely not have the same size as an Apache HTTPD log from server B.

xxx.xxx.xxx.xxx - - [25/Aug/2011:04:23:47 +0200] "GET /feed/ HTTP/1.1" 304 - "-" "Apple-PubSub/65.28"

This log from Apache HTTPD server A has a size of 102 bytes.

xxx.xxx.xxx.xxx - - [25/Aug/2011:04:15:08 +0200] "GET /wp-content/themes/mystique/css/style-green.css?ver=3.0.7 HTTP/1.1" 200 1326 "http://eromang.zataz.com/" "Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.9.2.20) Gecko/20110803 Firefox/3.6.20 ( .NET CLR 3.5.30729)"

This log from Apache HTTPD server B has a size of 274 bytes.

Also, depending on the Log or Event Management product, you need to account for data generated by the product's own internal mechanisms. For example, in order to let you search your events, most products create indexes, and these indexes represent on average about twice the size of the event itself (so the 274-byte event above would consume roughly 800 bytes once stored and indexed). Another internal mechanism is that these products also monitor themselves, regularly execute tasks, and compute statistics for dashboards or reports.

I have developed a bash script that will allow you to analyze all your archived logs and gather the following information (a minimal sketch of the same idea is shown after this list):

  • For each archived file: the total number of events, the total uncompressed size of the events, and the Normal Event log size.
  • The total number of events across all archived files.
  • The total uncompressed size of all events in all archived files.
  • The grand total Normal Event log size.
  • The average number of events per archived file.
  • The average bytes per archived file.
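
For reference only, here is a minimal Python sketch of the same idea, assuming one event per line in gzip-compressed archive files; the real script referenced below is written in bash:

# Minimal sketch: count events and compute average event size from gzip archives.
import glob
import gzip
import sys

pattern = sys.argv[1] if len(sys.argv) > 1 else "*.gz"
total_events = 0
total_bytes = 0

for path in sorted(glob.glob(pattern)):
    events = 0
    size = 0
    with gzip.open(path, "rb") as archive:
        for line in archive:       # one event per line (assumption)
            events += 1
            size += len(line)      # uncompressed bytes, newline included
    print("%s: %d events, %d bytes uncompressed, %.1f bytes/event"
          % (path, events, size, (size / events) if events else 0))
    total_events += events
    total_bytes += size

if total_events:
    print("Total: %d events, %d bytes, average %.1f bytes/event"
          % (total_events, total_bytes, float(total_bytes) / total_events))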

You can download this script by clicking on this link. As a reminder, the Normal Events per second value provided by the script is not your real EPS rate; just check my previous blog post, “Why and howto calculate your Events Per Second“.

Cloud Solutions Are Not PCI Compliant

Despite all good intentions, not a day goes by without payment card data being found on the Internet, either through a breach or simply through “Google Hacking“. This card data, stolen and exploited by attackers, undermines e-commerce and users’ trust in online payments.

You surely all know the PCI standard, which promotes the protection of payment data such as your personal data, your credentials, and above all your card number, its expiration date, and its card verification code. The PCI Security Standards Council forum has developed security standards that define prerequisites for day-to-day security management, in technical terms as well as in terms of security policies and processes.

More and more companies have therefore decided to join the PCI forum and to get certified as PCI-DSS compliant, in order to polish their image and restore users’ confidence in online payments.

Unfortunately, at the same time, these same companies, always looking for economies of scale, have shown great interest in Cloud solutions such as SaaS (Cloud Application), PaaS (Cloud Software Environment) and IaaS (Cloud Infrastructure). Many security researchers, as well as many security professionals, already doubted at the time that a Cloud solution could be claimed to be completely secure and compliant with numerous standards.

Amazon, with its EC2 and S3 Cloud solutions (IaaS), has unfortunately confirmed, in all honesty, that it would be impossible for a company using these solutions to achieve PCI-DSS Level 1 compliance if card data were stored on them.

Many other Cloud solution providers will unfortunately have to align themselves with Amazon's statement, so that companies interested in their solutions know the risks they run in terms of security and standards compliance.