AWS Privacy Industry Specialist - Devices
What are the key security requirements?
Authentication and password management. Authorization and role management. Audit logging and analysis. Network and data security. Code integrity and validation testing. Cryptography and key management. Data validation and sanitization.
What is the best approach to security testing in agile?
Automation: Automated tests should be performed at regular intervals so that they become part of the standard testing process. On an IoT platform, automated scans should be scheduled at various frequencies, such as daily, weekly, or monthly. In addition, all commits made by the development team should be reviewed by security experts to ensure the application being developed is secure.
Describe your approach to conduct a privacy impact assessment of the Amazon Alexa Product.
Make sure to truly understand the technology before starting the assessment. Provide an assessment framework that meets the requirements of the GDPR. Develop a taxonomy of data protection risks. Apply risk management expertise to help you balance the risks involved.
Alexa Privacy Controls - Text-to-Speech (TTS)
Once the necessary information is gathered, your request uses text-to-speech (TTS) technologies to generate an audio file that becomes Alexa's response. The text from Alexa's response is stored so that you can review past answers at Alexa Privacy Settings or in the Alexa app. These responses may also be reviewed by Amazon to ensure the device is providing the most relevant answers to you, and to make sure the TTS system is translating text to speech to the best of its ability.
PbD Questions about End-to-end security - lifecycle protection
Personal data must be kept secure.
How do privacy by design and privacy engineering operate together?
Privacy engineering brings tools, techniques, metrics, and taxonomy to implement 'Privacy by Design'. By building privacy protections at the core design, privacy engineering aims to reduce privacy risks and to protect privacy at scale.
What are the various components of privacy engineering?
Privacy engineering involves aspects such as process management, security, ontology and software engineering. The actual application of these derives from necessary legal compliances, privacy policies and 'manifestos' such as Privacy-by-Design.
What is privacy engineering in cyber security?
Privacy engineering is a methodological framework of integrating privacy in the life cycle of IT system design and development. It operationalizes the Privacy by Design (PbD) framework by bringing together methods, tools and metrics, so that we can have privacy protecting systems.
PbD Questions about Visibility and transparency
Privacy information should be concise, transparent, intelligible, and in an easily accessible form which uses clear and plain language. Take a user-centric approach to user privacy.
What is Pseudonymization?
Pseudonymization is a process that allows an organization to switch the original set of data (for example, data subject's e-mail) with an alias or a pseudonym. A particular pseudonym for each replaced data value makes the data record unidentifiable while remaining suitable for data processing and data analysis. It should, however, be noted that this also makes it possible for the organization to perform a reverse process - the re-identification of the data. This is why pseudonymized data are always in the scope of the GDPR.
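As a sketch of the idea, the Python snippet below replaces an e-mail address with a stable alias while keeping a mapping that allows the organization to reverse the process. The HMAC key and the mapping store are illustrative assumptions, not a prescribed design:

```python
import hashlib
import hmac

# Illustrative sketch: the key and the mapping store are hypothetical.
SECRET_KEY = b"org-held-secret"       # kept separate from the pseudonymized data
_alias_to_value: dict[str, str] = {}  # enables the reverse process (re-identification)

def pseudonymize(value: str) -> str:
    """Replace a data value (e.g. a data subject's e-mail) with a stable alias."""
    alias = hmac.new(SECRET_KEY, value.encode(), hashlib.sha256).hexdigest()[:16]
    _alias_to_value[alias] = value
    return alias

def re_identify(alias: str) -> str:
    """Reverse the pseudonymization -- possible, which is why GDPR still applies."""
    return _alias_to_value[alias]
```

Because the alias is stable, pseudonymized records remain suitable for processing and analysis, yet anyone holding the key and mapping can re-identify the data subject.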
How would you secure an IoT device deployment? (1)
IoT security is the act of securing Internet-connected devices and the networks they're connected to from threats and breaches: protecting, identifying, and monitoring risks, while helping fix vulnerabilities across the range of devices that can pose security risks to your business.
A non-invasive attack means that the attacker must come close enough to the target chip to sense its electrical characteristics. By targeting these electrical impulses, a hacker can change the behavior of the device and access sensitive information. An invasive attack requires that the chip surface be exposed. The hacker can then physically manipulate the chip itself, altering its characteristics.
In asymmetric cryptography-based authentication, there's a private key as well as a public key. The device to be authenticated is the only entity that knows the private key, while the public key can be shared with any entity that intends to authenticate the device. As with the previous method, the function used to compute the signature should have certain mathematical properties; in this case, RSA and ECDSA are commonly used functions.
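A minimal sketch of such a challenge-response flow, assuming the third-party `cryptography` package and ECDSA over the P-256 curve (the key names and challenge value are illustrative):

```python
# Sketch only: assumes the third-party `cryptography` package is installed.
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec

# Provisioning: the private key never leaves the device; the public key is shared.
device_private_key = ec.generate_private_key(ec.SECP256R1())
verifier_public_key = device_private_key.public_key()

# Authentication: the device signs a fresh challenge from the verifier.
challenge = b"random-nonce-from-verifier"  # illustrative; use a real random nonce
signature = device_private_key.sign(challenge, ec.ECDSA(hashes.SHA256()))

# The verifier checks the signature; verify() raises InvalidSignature on mismatch.
verifier_public_key.verify(signature, challenge, ec.ECDSA(hashes.SHA256()))
```

Only the holder of the private key can produce a signature that the public key verifies, which is what lets the verifier authenticate the device without sharing any secret.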
PbD Questions about anonymization
1) Can we anonymize and aggregate the data (so there is no chance that data subjects can be re-identified)? 2) Can we use one-way hashing instead of raw data?
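A hedged sketch of the one-way hashing idea in Python: salting with a random value that is immediately discarded makes the digest unlinkable as well as irreversible (the input value is illustrative):

```python
import hashlib
import secrets

def anonymize(value: str) -> str:
    """One-way hash with a random salt that is never stored.

    Without the salt, the digest cannot be reversed or matched against a
    dictionary of known inputs, so re-identification is not possible.
    """
    salt = secrets.token_bytes(16)  # discarded when the function returns
    return hashlib.sha256(salt + value.encode()).hexdigest()
```

Note that a plain unsalted hash of an e-mail address would not be true anonymization, since anyone could hash candidate addresses and compare the digests.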
PbD Questions about Pseudonymisation
1) Can we pseudonymize the data (so that data subjects cannot be re-identified unless that data is combined with additional information)?
There are two main categories of physical security attacks: non-invasive and invasive.
How would you introduce security reviews and requirements in an Agile environment?
1) At the daily stand-up meeting, where the state of stories is reviewed, the team should be listening for any issues raised that may affect security and privacy. If there are stories of security importance, then progress on those stories needs to be monitored. 2) During development, having someone with a strong security background available to pair on security-sensitive code can be worthwhile, especially on teams that follow pair programming as a general practice. 3) If your team does team code reviews (and it should) through pull requests or collaborative review platforms, then having a security person review code changes can help identify areas of the code base that need careful attention and additional testing. 4) At the beginning of each iteration, at the kick-off or planning meeting, most teams walk through all the potential stories for the iteration together. Somebody representing security should be present to ensure that security requirements are understood and applicable to each story. This helps ensure that the team owns and understands the security implications of each story. 5) At the end of each iteration, security should also be involved in the reviews and retrospective meetings to help understand what the team has done and any challenges that it faced. All of these points provide opportunities for security to engage with the development team, to help each other and learn from each other, and to build valuable personal connections. Also, making the barrier to interacting with the security team as low as possible is key to ensuring that security does not get in the way of delivery. Security needs to provide quick and informal guidance, and answers to questions through instant messaging or chat platforms, email, and wherever possible in person, so that security is not seen as a blocker.
If those responsible for security are not reachable in an easy and timely manner, then it is almost certain that security considerations will be relegated to the sidelines as development charges ahead.
Security constraints for a standard web application:
1) Bind variables in SQL statements to prevent SQL injection 2) Verify integrity of client-supplied read-only data to prevent parameter manipulation 3) Escape untrusted data in HTML, HTML attributes, Cascading Style Sheets and JavaScript to prevent Cross Site Scripting (XSS) 4) Avoid DOM-based XSS in client-side JavaScript 5) Use safe arithmetic to avoid integer overflow 6) Disallow external redirects to prevent open redirects 7) Authorize protected pages to prevent privilege escalation 8) Use anti-cross-site request forgery (CSRF) tokens 9) Validate input 10) Use regular expressions that are not vulnerable to Denial of Service 11) Implement transactional authentication for high-value transactions 12) Do not hard-code passwords
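As an illustration of the bind-variable and escaping constraints, the Python sketch below uses SQLite placeholders and stdlib HTML escaping (the table and input values are made up):

```python
import html
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT)")
conn.execute("INSERT INTO users VALUES (?)", ("alice",))

# Bind variables: attacker-controlled input is passed as a parameter,
# never concatenated into the SQL string, so the injection fails.
user_input = "alice' OR '1'='1"
rows = conn.execute("SELECT name FROM users WHERE name = ?", (user_input,)).fetchall()

# Escape untrusted data before placing it in an HTML context.
safe = html.escape("<script>alert('xss')</script>")
```

With binding, the whole injection string is treated as a literal name and matches nothing; with escaping, the script tags are rendered as text rather than executed.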
PbD Questions about Collection ('must-have, or nice-to-have'?)
1) Can we achieve our goals without processing personal data at all? 2) Can we take steps to minimize the identifiability or linkability of data sets? 3) Is special category/sensitive data necessary and justified (eg medical information for a regulated health app)?
Ensure Proactive Controls for Agile Development Teams
1) Defining security requirements - Agile methods should be built on industry standards, past vulnerabilities, and applicable laws to ensure the product satisfies all security properties. 2) Utilizing security libraries and frameworks - These libraries and frameworks have embedded security measures to protect code against design and implementation flaws that lead to security vulnerabilities. 3) Keeping data stores secure - Teams should ensure both NoSQL and relational databases are kept safe using secure queries, authentication, configuration, and communication. 4) Encoding and escaping data - Encoding and escaping are used to prevent injection vulnerabilities. Escaping involves using special characters to avoid misinterpretation of strings, while encoding refers to the translation of characters into equivalent values in different formats, making them harmless to malicious interpreters. 5) Input validation. 6) Identity and Access Management (IAM). 7) Error and exception handling. 8) Security logging and monitoring.
Advanced measures for IoT physical security
1) Deploy only authenticated devices. 2) Secure the device in a tamper-resistant case. 3) Enable only authenticated access to the secure devices. 4) Disable the device upon tampering. 5) Prevent probing of conductors. 6) Prevent access to any hardware components.
PbD Questions about Cookies
1) Do we have a cookie banner and cookie notice/policy in place?
PbD Questions about Privacy policy changes
1) Do we have a privacy policy/notice in place that clearly provides all of the required information? 2) Do we update it regularly, or when we do something new? 3) Do we have a process for disclosing and explaining significant changes?
PbD Questions about Updates, patches and vulnerability testing
1) Do we have anti-virus/anti-malware programs in place? 2) Do we have processes in place for penetration testing of company infrastructure at regular intervals? 3) Do we have appropriate updating and patching procedures in place, including verifying patch sources and packet integrity? 4) Have we ensured that our devices and software are subject to security development lifecycle testing (including regression testing and threat modeling)?
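Verifying patch integrity (point 3) can be as simple as comparing the downloaded artifact against a checksum published out-of-band by the vendor; a minimal sketch, with illustrative patch bytes:

```python
import hashlib

def verify_patch(patch_bytes: bytes, expected_sha256: str) -> bool:
    """Return True only if the patch matches the vendor-published digest."""
    return hashlib.sha256(patch_bytes).hexdigest() == expected_sha256
```

Real update pipelines typically also verify a cryptographic signature over the artifact, not just a checksum, so that a compromised download server cannot substitute both the patch and its digest.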
PbD Questions about Wireless networks and firewalls
1) Do we have appropriate controls in place for wireless networks, including ring-fencing different networks, and access logs? 2) Do we have firewalls for external or separate internal networks? 3) Do we have processes to block higher-risk websites/platforms which might pose a risk to personal data (eg file-sharing sites, personal email)? 4) Have we ensured processes are in place for flagging, quarantining or deleting suspicious email?
PbD Questions about Purpose and functionality evaluation
1) Do we have clearly defined, limited, relevant purpose(s) that we want to collect and use personal data for? 2) Do we tell individuals what these purposes are?
PbD Questions about Retention times
1) Do we need to retain the personal data for as long as planned? 2) Can we delete or archive or aggregate it and, if so, what is the earliest stage we can do that? 3) Can the retention and deletion process be automated to any degree?
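A sketch of automating the deletion step, assuming records carry a `collected_at` timestamp (the field name and retention period are illustrative):

```python
from datetime import datetime, timedelta, timezone

RETENTION = timedelta(days=365)  # illustrative retention period

def purge_expired(records, now=None):
    """Keep only records still inside the retention window; expired ones
    would be deleted, archived, or aggregated by the calling job."""
    now = now or datetime.now(timezone.utc)
    return [r for r in records if now - r["collected_at"] <= RETENTION]
```

A scheduled job running this kind of filter is one way to make retention and deletion automatic rather than dependent on manual review.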
PbD Questions about Putting the individual first
1) Do we set default profile or account settings in a way that is most friendly to the user? For example, where users can share profiles or content, do we start by automatically making accounts private instead of public by default? 2) Do we offer genuine, effective controls and options to individuals relating to the data we will collect and process, rather than providing an illusory choice?
PbD Questions about Data Protection Impact Assessments (DPIAs)
1) Have we considered in advance whether any planned use of data involves technology in ways which are new, innovative, or which give rise to processing or events that might be unexpected, intrusive or could present higher risks of harm to individuals? 2) Where appropriate, have we conducted a DPIA (noting that in certain instances doing so is mandatory)? 3) Keep a record of DPIA decisions.
PbD Questions about Opt-in/opt-out
1) Have we created controls for granular data sharing user preferences (eg opt-in/opt-out), detailing the benefits or consequences of doing so in a clear and objective manner, including any potential impact to product features or functionality?
PbD Questions about Data erasure and destruction
1) Have we designed a process that enforces secure data erasure and/or destruction? 2) Do we have appropriate deletion methods in place for each category of personal data (eg overwriting, degaussing, shredding encryption keys, physical destruction etc)?
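As one hedged example of an overwriting method, the sketch below overwrites a file with random bytes before unlinking it. This is a best-effort technique for magnetic media; on SSDs and journaling filesystems, shredding the encryption key (crypto-erasure) or physical destruction is more reliable:

```python
import os
import secrets

def shred(path: str, passes: int = 3) -> None:
    """Overwrite a file's contents with random bytes, then delete it."""
    size = os.path.getsize(path)
    with open(path, "r+b") as f:
        for _ in range(passes):
            f.seek(0)
            f.write(secrets.token_bytes(size))
            f.flush()
            os.fsync(f.fileno())
    os.remove(path)
```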
PbD Questions about Data protection policy
1) Have we implemented a clear data protection policy document, setting out our organization's ethos and overall approach to data protection and privacy?
PbD Questions about Security and privacy risk assessments
1) Have we implemented protocols assessing and securing guarantees from our data processors as to the sufficiency of the technical and organizational safeguards they apply when processing personal data on our behalf?
PbD Questions about Data back-up and recovery
1) Have we made sure we have appropriate data back-up and recovery systems in place (for example, if there is a data breach or a natural disaster)? 2) Do we follow a business continuity plan, and test it regularly?
What are the key security factors to be considered when deploying IoT devices?
1) Privacy and Security. 2) Interoperability Standards. 3) Hosting. 4) Data Connectivity. 5) Incorrect Data Capture. 6) Scalability.
There are subcategories of attacks within the invasive and non-invasive categories:
1) Side channel analysis (non-invasive) 2) Tamper attack (invasive) 3) Fault injection attack (invasive or non-invasive) 4) Power/clock/reset 5) Optical, electromagnetic fault injection (invasive or non-invasive) 6) Frequency/voltage (invasive)
5 Ways to Prevent a Physical Breach from Compromising Network Security
1) Use certificate-based authentication on all IoT devices on your network. This would prevent an unauthorized endpoint from establishing a connection with your IoT device. 2) Encrypt device data. Robust IoT devices should have methods to manage authentication and encryption around the IoT device data and functionality over time. Always encrypt data before sending it over untrusted networks, especially if a secure connection isn't possible. Data decryption should only be done with a trusted application or backend server. 3) Minimize the connections between your IoT network and the enterprise network. Use a dedicated network infrastructure for IoT devices to minimize the potential attack surface. 4) Use unique and random credentials for each device. This will prevent widespread attacks that use and replicate the same credentials over multiple devices. Use unique, asymmetric digital certificates to identify devices, with an underlying private key embedded in the device's hardware. 5) Employ network scanning technology. This is critical to understanding what is on the network. It also allows you to take immediate action if you don't recognize a device or its purpose.
PbD Questions about Full functionality - positive-sum, not zero-sum
1) Users should have full functionality regardless of their privacy settings, except where it is not feasible to provide the service without their data (eg map apps requiring location data, or an online shop providing fit recommendations requiring user clothing size data). 2) Have we ensured that features don't require non-necessary personal data in order to access or use them?
PbD Questions about Privacy by default
1) You must only process personal data necessary to achieve your specific purpose. In some cases when dealing with children's data, you may need to set maximum privacy by default.
PbD Questions about Encryption
1) Have we ensured processes are in place for encrypting data where appropriate? For example: hard drives and solid-state drives on laptops and desktops, any web-to-user traffic, any websites (from the device to the backend service), and Bluetooth connections transmitting sensitive information.
PbD Questions about Training
1) Do we cover data protection by design and default in staff training, so individuals can understand and engage with any issues proactively, systematically, and innovatively?
How would you secure a Smart Home IoT device deployment? (3)
2. Change the name and password of the router. Routers are often named after the manufacturer or the network that you're using, which gives hackers a vital clue to how to get access. It's also a good idea to avoid using your own name or address: these are useful clues for hackers trying to get into your network. 3. Don't use default settings.
How would you secure a Smart Home IoT device deployment? (4)
4. Use strong passwords that are random passwords containing a mix of letters, characters, and symbols.
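For example, Python's `secrets` module draws from a cryptographically secure random source; a small sketch of generating such a password (the length and alphabet are illustrative choices):

```python
import secrets
import string

ALPHABET = string.ascii_letters + string.digits + string.punctuation

def random_password(length: int = 16) -> str:
    """Build a password from a CSPRNG over letters, digits, and symbols."""
    return "".join(secrets.choice(ALPHABET) for _ in range(length))
```

Using `secrets` rather than `random` matters here: the latter is predictable and unsuitable for credentials.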
How would you secure a Smart Home IoT device deployment? (5-6 )
5. Avoid using public Wi-Fi when you're accessing your IoT network through your laptop or smartphone. 6. Use a Virtual Private Network (VPN) like Kaspersky's VPN Secure Connection. A VPN gives you a private, encrypted gateway to the internet and stops eavesdroppers from being able to intercept your communications.
How would you secure a Smart Home IoT device deployment? (7)
7. Start using guest networks. It's a great idea to use a guest network for visitors who want to use your Wi-Fi at home; it doesn't give them access to the main network or your email and other accounts. You can also use a guest network for your IoT devices. That means even if a hacker compromises one of your devices, they will be stuck in the guest network — they won't be able to control your primary internet access.
How would you secure a Smart Home IoT device deployment? (8)
8. Use a strong encryption method, such as WPA2 or WPA3, for Wi-Fi access.
How would you secure a Smart Home IoT device deployment? (9)
9. Take special care to secure the top-level control of your IoT network. It's not a bad idea to use two-factor authentication, using biometrics, a pass card, or a dongle to ensure that a hacker won't be able to produce both proofs of identity required.
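Time-based one-time passwords (TOTP, RFC 6238) are a common software second factor; a stdlib-only Python sketch of the algorithm, checked against the RFC's published SHA-1 test vector:

```python
import hmac
import struct

def hotp(key: bytes, counter: int, digits: int = 6) -> str:
    """HOTP (RFC 4226): HMAC the counter, dynamically truncate, keep `digits`."""
    mac = hmac.new(key, struct.pack(">Q", counter), "sha1").digest()
    offset = mac[-1] & 0x0F
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

def totp(key: bytes, for_time: int, step: int = 30, digits: int = 6) -> str:
    """TOTP (RFC 6238): HOTP with the counter derived from Unix time."""
    return hotp(key, for_time // step, digits)
```

The verifier and the device share only the key; because codes expire every 30 seconds, an intercepted code is of little use to an attacker.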
Alexa Privacy Controls - Natural Language Understanding
After ASR comes Natural Language Understanding (NLU). NLU assigns meaning to the transcription, producing an "intent," which is an instruction that tells the Alexa system what to respond to. In our example, the intent is identified as "weather" and sent to the designated data source, where information is pulled for location (Seattle) and time (today). Intents may also be used to improve the NLU process.
Alexa Privacy Controls - Automatic speech recognition (ASR)
After wake word verification, your voice request undergoes Automatic Speech Recognition (ASR), which transcribes audio into text. Using our example request ("Alexa, what's the weather in Seattle?"), transcriptions could include one or several of the following: "what is the weather in seattle", "watt is the weather in seattle", "what is the whether in seattle". Alexa then chooses to act on the transcription that is most likely to be correct and stores it in Amazon's secure cloud. You can view and delete it at any time at Alexa Privacy Settings or in the Alexa app (Settings > Alexa Privacy > Review Voice History). Voice recordings and transcriptions may be used to improve the ASR process.
What is the difference between traditional security and agile security?
Agile and traditional security use the same security activities. An agile application security approach doesn't change the touchpoints, because changing the speed of development doesn't change the types of security bugs or flaws you introduce. Continue using the security fundamentals your business is accustomed to: 1) application security training for everyone involved with application building projects; 2) security requirements during requirements gathering; 3) architecture risk analysis during design; 4) static application security testing or secure source code review during development; 5) dynamic application security testing or penetration testing before production.
What is agile development?
Agile development helps software firms adopt lightweight, iterative development cycles. The development methodology emphasizes small, manageable teams with cross-functional collaboration for frequent updates and releases.
What Privacy controls are in Amazon Alexa and Amazon Echo?
Amazon designs Alexa and Echo devices with multiple layers of privacy and security, from built-in protections to controls and features you can see, hear, and touch. On the device: microphone off buttons, camera shutters, and light indicators. In the app and online: review your Alexa voice history and decide what you want saved and how it will be used. Ask Alexa: ask Alexa to delete voice recordings, adjust settings, or explain what was heard. Wake word detection: when you have a request for Alexa, you first need to say your chosen wake word, which by default is "Alexa." Only after your Echo device detects the wake word is Alexa listening to your requests. But how does that actually work? When it comes to privacy, there should be no surprises. You'll always be able to tell when Alexa is listening to your request because a light indicator will appear on your Echo device or an audible tone will sound. Think of the "On the Air" signs that light up in television studios during a broadcast. These indicators notify you that your device has detected the wake word and Alexa is now processing your request. Still want to know exactly what Alexa heard? An easy way to see for yourself is to check your voice history in the Alexa app (Settings > Alexa Privacy > Review Voice History) or at Alexa Privacy Settings.
What is Anonymization?
Anonymization is a technique used to irreversibly alter data so that the data subject to whom the data relates can no longer be identified. Anonymized data are not in the scope of the GDPR. Whatever control or set of controls is used to mitigate privacy risks, be it traditional controls, the more novel ones described above, or a combination of both groups, it is important to understand that there is always a residual risk.
How would you add physical security to an IoT device in a way that would notify you if the physical security were compromised? (2)
Another solution is to use Digital Rights Management (DRM) for anti-tamper security on top of already existing copy protection mechanisms. This would provide a multi-layered approach in which the original DRM software protects the software from unauthorized copying, modification, or use, while preventing any attempt to remove or alter said protection.
Checkmarx is a security tool that enables developers to self-test their code before committing it for compile, well before launch. It actively helps in discovering security flaws on an SAST basis, and the solution also offers recommendations for fixing bugs and for complying with development best practices.
Because SAST helps development teams catch security vulnerabilities early on in the development lifecycle, developers, project managers and application security professionals can more easily make adjustments along the way, thus helping to safeguard agility.
PbD Questions about Purpose limitation
Can we ensure we only use the data we need for the purposes we have identified?
Ensure Communication Among Security Team Members
Communication is the heart of teamwork. If you don't facilitate communication between different teams involved with your application's security testing process, there's a good chance that individual members will get carried away with focusing only on their own concerns. Keep an open line of communication across management, the AppSec team, and the software developers, so that everyone is aligned regarding priorities, expectations and goals. An automated solution such as Checkmarx helps streamline communication among different members of the development team, as the entire suite can be integrated into the tools used by the developers, such as IDEs, bug tracking tools, build servers, source code repositories and reporting systems. Its dashboards and viewers provide your team with a centralized means to communicate and manage security alerts and flaws, as well as track enhancements and changes that your software undergoes throughout its lifecycle.
PbD Questions about Children's data
Comply with applicable children's privacy laws on all systems, including the Age Appropriate Design Code in the UK and the Children's Online Privacy Protection Act (COPPA) in the US.
What is DHCP?
DHCP is an acronym for Dynamic Host Configuration Protocol. It is a network management protocol that's used by servers to automatically assign IP addresses to the computers and devices connected to them.
One of these side channel attacks is differential power analysis (DPA). This technique monitors the tiny amounts of energy dissipated in signal lines to determine the bits being transmitted, and it has been used to recover the encryption keys used in a system. Monitoring leakage current can similarly reveal data, while electromagnetic emissions can also potentially provide information about the data being sent.
DPA countermeasures consist of a broad range of software, hardware, and protocol techniques that protect tamper-resistant devices from side-channel attacks. These include reducing the information leaked into the side-channel to decrease signal-to-noise (S/N) ratios. Designers can also add amplitude or temporal noise into the side-channel to decrease that S/N ratio.
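A related software-level side-channel defense (timing rather than power) is constant-time comparison of secrets; in Python this is `hmac.compare_digest`:

```python
import hmac

def check_token(expected: bytes, received: bytes) -> bool:
    """Compare secrets in constant time, so the comparison itself does not
    leak how many leading bytes matched through a timing side channel."""
    return hmac.compare_digest(expected, received)
```

An ordinary `==` on secrets can return early at the first mismatching byte, letting an attacker recover a token byte by byte from response timings.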
PbD Questions about Respect for user privacy
Data portability - Can we export personal data in a commonly used, machine-readable format? Right to be informed - Do we fulfill individuals' rights to be informed about the data we hold about them? Right of access - Do our systems facilitate individuals' right to request access to data the company holds about them? Right to rectification - Do our systems facilitate individuals' right to correct the data we hold about them? Right to erasure - Do our systems facilitate individuals' right to delete the data we hold about them? Right to restrict processing - Are we able to freeze/quarantine data we hold about an individual? Right to data portability - Can we provide individuals with their data in a commonly used and machine-readable format? Can we transmit that information to another organization if required to? Right to object - Do we have procedures in place to enable data subjects to object to how we're using their information, particularly in relation to any direct marketing or higher-risk uses? Exemptions - Remember that not every right will be applicable in all situations; it will depend on the type of data being processed and the legal basis for the processing.
PbD Questions about Privacy embedded into design
Data protection considerations should be embedded into business practices as an essential component, not as an afterthought.
PbD Questions about Authentication and access control
Do we have appropriate user access controls in place, including appropriate logical access controls, and procedures for deleting old user IDs?
PbD Questions about Hardware
Do we have protections in place for all systems to prevent personal data being copied to removable media (CD/DVDs, external hard disks, USB memory sticks etc)?
PbD Questions about Remote working
Do we have protocols for remote access control including the use of two-factor authentication, one-time passwords and/or virtual private networks?
Alexa Privacy Controls - keyword spotting
Echo devices use built-in technology called "keyword spotting" that matches spoken audio to the acoustic patterns of the wake word. Simply put, Echo devices are designed by default to detect only the sound waves of your chosen wake word, and everything else is ignored. Like water through a strainer, all other audio (people talking, faucets running, birds chirping) passes through the device until the wake word is "caught" and sent to Amazon's secure cloud, where meaning is assigned to your request. You can choose to opt-in to features like Alexa Guard that allow your device to detect more than your chosen wake word—for example, the sound of smoke alarms—but you would need to update your settings to do so.
PbD Questions about Secondary Uses of data
If delivering a product/service requires the data to be identifiable, can any secondary uses (eg analytics, R&D, reporting etc) use aggregated or pseudonymised data?
How would you secure a Smart Home IoT device deployment? (2)
First off, lock the front door - that is, secure the router. If a hacker gets control of the router, they control the network, which means they can control any device in your house, from the door locks to your computer.
What does privacy engineer do?
Guide the development of new privacy products and features. Identify areas of improvement in local practices relative to managing data privacy. Perform regular privacy assessments of operational processes, identifying and mitigating risks across the company through effective tools, training, and guidance.
PbD Questions about Certification and existing evidence
Have we considered obtaining a security certification (like ISO/IEC 27001), if appropriate?
PbD Questions about Incident response plan
Have we created an incident response plan during the process of designing a new product/service, and considered what security measures may be needed in case of an incident (for example, an access breach, a virus, or physical server damage)?
PbD Questions about Privacy settings and preferences
Have we created controls and/or documentation enabling individuals to review and revise their privacy settings and preferences? For example, an audit tool for users so that they can determine how their data is stored, protected and used, and decide if their rights are being adequately protected.
PbD Questions about Opt-in/opt-out
Have we created controls for opt-in and opt-out of sharing data by the user, detailing the benefits or consequences of doing so in a clear and objective manner, including any potential impact to product features or functionality?
PbD Questions about Data minimization
Have we minimized the personal data we collect to only what we need for our purposes?
How would you add physical security to an IoT device in a way that would notify you if the physical security were compromised?
I would choose an IoT product that inserts anti-tamper protection into the firmware of the device itself, causing parts of the code to continually check each other for integrity. If any tamper attempt is detected, the product would be designed to attempt to restore the code to its original form, stop the firmware from running entirely, send a notification to the developer, or any combination of the three.
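The self-checking idea can be sketched in Python (the region names, contents, and digests are hypothetical; real firmware would do this in C against digests stored in protected memory):

```python
import hashlib

# Hypothetical firmware image split into regions that check each other.
REGIONS = {"boot": b"...boot code...", "app": b"...application code..."}

# Reference digests computed at build time and stored in protected memory.
REFERENCE = {name: hashlib.sha256(code).digest() for name, code in REGIONS.items()}

def integrity_check(regions) -> list:
    """Return the regions whose code no longer matches its build-time digest;
    a non-empty result would trigger restore, shutdown, and/or notification."""
    return [name for name, code in regions.items()
            if hashlib.sha256(code).digest() != REFERENCE[name]]
```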
Employ Foundational Security Practices
In essence, application security testing should become an integral and essential part of your development process. You can make sure that every member of your development team becomes responsible for the security of the products you create, which contributes to the overall reliability of your product. Developers are, by nature, focused on functionality, whilst AppSec professionals will focus on security. Internalizing security into your organization's culture will require that developers are well aware of secure coding techniques and best practices. Aside from just focusing on features, they will need to be aware of potential attack vectors, common security flaws in programming languages, and bad coding habits to avoid. Knowing what to do, and what not to do, helps a lot in plugging security gaps from the get go. Also, adopt security testing standards and metrics for your applications. For instance, making sure that applications run with underprivileged accounts can prevent disasters even when security holes are discovered in your application. A final foundational measure is to periodically review the architecture of your software from a security perspective. Stay focused on how systems interact, and watch out for how outside actors can exploit APIs and other factors that are beyond your system's boundaries. By having distinct measures of what secure application standards are, you are more likely to find and fix potential security flaws before your app goes into production. You can also check your coding style and structure against security standards defined by prominent institutes such as SANS and OWASP, as a matter of best practice.
What is the difference between privacy by design and privacy engineering?
In the SDLC, privacy by design precedes privacy engineering. Privacy by design translates privacy requirements into an implementation plan. Privacy engineering, on the other hand, is the actual implementation, operation and maintenance.
What devices are IoT?
IoT devices include wireless sensors, software, actuators, computing devices and more. They are embedded in everyday objects and operate over the internet, enabling data to be transferred among objects or people automatically, without human intervention.
What is meant by k-means clustering?
K-means clustering is a type of unsupervised learning, which is used when you have unlabeled data (i.e., data without defined categories or groups). The goal of this algorithm is to find groups in the data, with the number of groups represented by the variable K.
How does k-means clustering work?
K-means clustering uses "centroids": K different randomly initialized points in the data. Every data point is assigned to the nearest centroid, and each centroid is then moved to the average of all of the points assigned to it. These two steps repeat until the assignments stop changing.
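A minimal sketch of those assign-and-update steps on toy one-dimensional data (real projects would normally use a library such as scikit-learn, and the data and iteration count here are illustrative):

```python
import random

def kmeans(points, k, iters=10, seed=0):
    """Minimal k-means: random centroids, assign points, recompute means."""
    rng = random.Random(seed)
    centroids = rng.sample(points, k)        # K randomly chosen starting points
    for _ in range(iters):
        # Assignment step: each point goes to its nearest centroid.
        clusters = [[] for _ in range(k)]
        for p in points:
            nearest = min(range(k), key=lambda i: (p - centroids[i]) ** 2)
            clusters[nearest].append(p)
        # Update step: move each centroid to the mean of its cluster.
        centroids = [sum(c) / len(c) if c else centroids[i]
                     for i, c in enumerate(clusters)]
    return sorted(centroids)

data = [1.0, 1.1, 0.9, 10.0, 10.2, 9.8]
print(kmeans(data, k=2))   # two cluster centers, near 1.0 and 10.0
```

With two well-separated groups like this, the centroids converge to the group means after a few iterations regardless of the random start.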
Imagine that you are part of a cross-functional product development team that includes researchers, software engineers, user experience designers, product managers, as well as the team's government counterparts. Your team is building a completely new digital service for a federal agency. The agency requires that your team produce a Security Impact Analysis (SIA) whenever changes that affect your product's security posture are introduced and has given you freedom to design the process. Your task: As the security and compliance expert on your team, you have been asked to design a process the team can follow for completing a Security Impact Analysis (SIA) when required. This process should give your team members the information they need to determine when and how to complete an SIA.
Review the paragraph below, excerpted from section CM-4 in this document. It describes the requirements for an SIA. Control: Analyze changes to the system to determine potential security and privacy impacts prior to change implementation. Discussion: Organizational personnel with security or privacy responsibilities conduct impact analyses. Individuals conducting impact analyses possess the necessary skills and technical expertise to analyze the changes to systems as well as the security or privacy ramifications. Impact analyses include reviewing security and privacy plans, policies, and procedures to understand control requirements; reviewing system design documentation and operational procedures to understand control implementation and how specific system changes might affect the controls; reviewing the impact of changes on organizational supply chain partners with stakeholders; and determining how potential changes to a system create new risks to the privacy of individuals and the ability of implemented controls to mitigate those risks. Impact analyses also include risk assessments to understand the impact of the changes and determine if additional controls are required. Goal: Design and document an SIA process for your team that addresses the CM-4 control. Audience: Your cross-functional product development team, including product, research, UX, and engineering contributors, is your primary audience.
Perform Threat Modeling on New Features
Some security vulnerabilities result from coding bugs, such as using a function (sprintf is a good example) that is prone to buffer overflow attacks. Finding and fixing coding bugs like these, however, is the least of your problems. The more challenging part of application security testing is finding vulnerabilities that stem from design flaws, such as poor sanitization techniques and weak encryption, which can turn into major headaches if you find out about them too late in the development cycle, when dozens or even hundreds of modules will break if you fix the bug. In order to avoid such unpleasant surprises, every new feature introduced into the application should be examined from a security perspective, in order to determine if it can be exploited for malicious purposes. Threat modeling involves examining your application and its code from a hacker's perspective and looking for ways it can be compromised. Anticipating and planning for security implications in advance can spare you from a lot of trouble that comes with design flaws that lead to security loopholes. This way, you can identify and mitigate the threat before other layers of code settle on top of it and make it more challenging to fix.
Use Static Application Security Testing Tools
Static Application Security Testing (SAST) tools are a software development team's best friend. As opposed to dynamic testing tools (DAST), which only work on compiled and executable binaries, SAST scans at the source code level, which makes it easier for individual members of a development team to apply. Since developers engage in application security testing in the early stages of the development process, SAST helps cut the costs and rippling effects that are attributed to the late discovery and correction of security risks. Moreover, for the most part, SAST application testing can be automated and transparently integrated into the development process, thus minimizing the extra effort that usually goes into assessing applications for security.
Antitamper and hardware protection mechanisms
Tamper resistance means using specialized components to protect against tampering with a given device. One of the common and effective ways of implementing antitampering, which can strengthen the hardware security of a device, is by adding tamper detection switches, sensors, or circuitry in the device, which can detect certain actions such as the opening of the device or its forceful breakage, and would act based on that by deleting the flash memory, or making the device unusable. Along with this, it is highly recommended to protect sensitive chips and components by removing their labels, hiding them with epoxy, and even encapsulating the chips in a secure enclosure.
Where does the os save the cache?
The data in a cache is generally stored in fast access hardware such as RAM (Random-access memory) and may also be used in correlation with a software component. A cache's primary purpose is to increase data retrieval performance by reducing the need to access the underlying slower storage layer.
The Agile Security Manifesto
The goal of the Agile Security Manifesto is to guide you as you develop new activities and adjust existing activities to make the switch to agile security. The four principles it describes are meant to inspire you to build secure software in an agile way: Rely on developers and testers more than security specialists. Secure while you work, not just after you're done. Implement features securely instead of adding on security features. Mitigate risks rather than fixing bugs.
How are agile security and traditional security different?
The implementation of security steps differs between traditional and agile security. Agile processes aren't special snowflakes. They just make process inefficiencies more obvious than their waterfall counterparts. Here are a few examples of how: Releasing every 14 days means a security assessment can't take 5 days each time. Stopping or slowing down development for days or weeks may actually pose more risk to your business than getting code through without detailed analysis. Operating outside of the build cycle isn't practical, since we are always in the build cycle with agile development.
Fit Security Testing Into Your Development Lifecycle
The key concept here is integration. The responsibilities of AppSec and development professionals do not have to conflict, and security testing should not be a separate phase of your development process. Rather, you should incorporate security in parallel to all other development disciplines, including requirement analysis, software design and construction. Distributing application security testing into smaller tasks can make the job easier, faster and cheaper. In addition, you will need to determine the priority of each task. By classifying them based on their nature and criticality, you'll be able to schedule bundles of tasks for each iteration and make sure all of them get covered throughout the software development lifecycle. For instance, making sure that user input is escaped to prevent SQL injection and Cross-Site Scripting (XSS) attacks is a crucial task that needs to be performed at each iteration. Other tasks, such as user account control (UAC) testing, are not as critical (albeit important) and can be scheduled to be carried out at every two or three iterations.
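As an illustration of the escaping task mentioned above, a parameterized query keeps user input out of the SQL grammar, and escaping neutralizes markup before it reaches a page. The table name and sample data below are made up for the example.

```python
import sqlite3
import html

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT)")
conn.execute("INSERT INTO users VALUES ('alice')")

# SQL injection: never interpolate user input into the query string.
# A parameterized query treats the input as data, not as SQL.
user_input = "alice' OR '1'='1"
rows = conn.execute("SELECT * FROM users WHERE name = ?",
                    (user_input,)).fetchall()
assert rows == []   # the malicious string matches no user

# XSS: escape user-controlled text before embedding it in HTML,
# so the browser renders it as text instead of executing it.
comment = "<script>alert('xss')</script>"
safe = html.escape(comment)
assert "<script>" not in safe
print(safe)
```

The same two ideas apply with any database driver or template engine: bind parameters for queries, and context-appropriate output encoding for rendered pages.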
Why do Agile Teams Fail to Secure Sensitive Data?
The most common mistake for Agile teams trying to secure sensitive information is improper data classification. Most teams fail to specify the data that needs special protection, making it difficult to enforce an effective data classification policy. Additionally, software development teams fail to optimally encrypt sensitive data in transit or at rest, making it easily accessible to threat actors. Another point of concern occurs when teams misuse cloud storage platforms, storing credential and configuration data without correctly interpreting the provider's security policies. This makes it challenging to implement a comprehensive strategy from a security perspective.
Alexa Privacy Controls - The wake word
Think of the wake word as a verbal cue that makes things happen. In the classic tale from The Arabian Nights, an invisible door into a mountain can only be opened by saying the phrase "Open Sesame." Use the phrase, the door opens. Don't use the phrase, the door stays shut. When you have a request for Alexa, you first need to say your chosen wake word, which by default is "Alexa." Only after your Echo device detects the wake word is Alexa listening to your requests. When it comes to privacy, there should be no surprises. You'll always be able to tell when Alexa is listening to your request because a light indicator will appear on your Echo device or an audible tone will sound. Think of the "On the Air" signs that light up in television studios during a broadcast. These indicators notify you that your device has detected the wake word and Alexa is now processing your request.
Why do periodic vendor evaluations not provide timely insight into a vendor's performance?
Vendor management procedures ensure that the vendor's privacy practices and controls are noted during the vendor evaluation process. However, if specific service level agreements (SLAs) designed to monitor a vendor's performance in relation to services that can affect privacy are not implemented, it will be difficult for the enterprise to provide proper monitoring related to privacy. Establishing privacy-related SLAs is the best option because SLAs can be used to create specific metrics or indicators to ensure that the vendor's privacy protections are maintained over time. Including a list of the vendor's privacy responsibilities in the contract is very important; however, it does not provide indicators of the vendor's performance like an SLA does.
Alexa Privacy Controls - Cloud verification and encryption
When your Echo device detects the wake word, it sends your request to Amazon's secure cloud where audio is reanalyzed to verify the wake word was spoken. If this cloud software verification is unable to confirm the wake word was spoken, the Alexa system stops processing the audio. If the wake word is verified (or Alexa is activated using the action button), your request will be processed using several sophisticated algorithms that allow Alexa to respond appropriately. All of your interactions with Alexa are encrypted in transit throughout this process.
How Does Security in Agile Development Differ from Traditional Development Approaches?
While Agile and traditional security rely on adopting similar security policies, the implementation of security features varies widely. In Agile methods, rapid release cycles imply frequent security test runs, which means that a detailed code analysis may ultimately slow down development and delivery. To tackle this, security risk indicators should be set to match the delivery rate, enabling Agile teams to undertake security reviews while dealing with frequent changes. Agile security practices should also be categorized into those completed at the beginning of development and those implemented during every sprint, depending on whether they need to be performed once or continuously.
Do routers save browsing history?
Yes, Wi-Fi routers do keep logs of your internet history. Therefore, unless you use a VPN (Virtual Private Network), a Wi-Fi owner can see your browsing history if they have the router's logging feature enabled. For instance, the Netgear Nighthawk router is able to store up to 256 entries in its activity log.
Does a router save the cache?
Yes. A router's cache is somewhat different from a computer or web browser's cache, so there will be no need for an IP address, DNS resolver cache, or related cache entries. A router's cache is dedicated to storing network information and instructions. If an error gets stored in this cache, it can lead to router malfunctions and dropped internet connections, particularly when the router is being used as a Wi-Fi extender. Clearing out the cache avoids this issue.
Alexa Privacy Controls - Data storage
You have control over your Alexa experience. We make it easy for you to update your privacy settings (try saying, "Alexa, update my privacy settings."), and you can always view and manage your Alexa interactions at Alexa Privacy Settings or in the Alexa app (Settings > Alexa Privacy). To review how long your voice recordings are currently saved and whether or not they're used to develop new features or manually reviewed to help improve Alexa, you can also ask, "Alexa, what are my privacy settings?"
A fault injection or perturbation attack can be both invasive and non-invasive. These are triggered when the hacker induces a faulty behavior in a system. This compromises security via
a fault or "glitch" that lets the hacker influence behavior and gain access. A clock, power supply, temperature control, or other environmental control is a common point of entry for perturbation attacks.
A non-invasive attack can only be carried out when
a hacker is within a small physical distance. Once they're that close they can scan a device's processor, learn its specs, alter its programming and copy any important information that's present.
Open internet ports are also
a major concern
Other techniques include adding randomness into the code to reduce the correlation between side-channels and the original data flow. Another way to protect against such attacks is to implement a physical unclonable function (PUF). This uses structures within a silicon device to generate a unique number that can also be used to protect against tampering. This is increasingly being used as a way to protect against reverse engineering, as there is no visible data to store that is vulnerable to tampering. The PUFs are defined as
functions based on physical characteristics which are unique for each chip, difficult to predict, easy to evaluate and reliable. These functions should also be individual and practically impossible to duplicate. This means that the PUFs can serve as a root of trust and can provide a key that cannot be easily reverse engineered. With this technique, the chip itself can check whether the environment is intact. During production or personalization, the IC measures its PUF environment and stores this unique measurement. From then on, the IC can repeat the measurement, usually during start up, and check if the environment has changed, which would indicate an alteration in the card body. This protects against many kinds of invasive attacks.
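The enrollment-then-recheck flow described above can be sketched as follows. Here `read_puf` is a hypothetical stand-in for the silicon measurement, and the noise and tolerance values are illustrative: real PUF responses are slightly fuzzy from one read to the next and use error correction, which the bit-difference tolerance below crudely models.

```python
import hashlib

def read_puf(noise=0):
    """Hypothetical stand-in for measuring a chip's physical fingerprint.

    Real PUF responses derive from manufacturing variation in the silicon
    and vary by a few bits between reads; `noise` models that here.
    """
    base = bytearray(hashlib.sha256(b"this-chip-id").digest())
    for i in range(noise):        # flip one bit per "noisy" byte
        base[i] ^= 0x01
    return bytes(base)

def enroll():
    """During production/personalization, the IC stores its own measurement."""
    return read_puf()

def verify(reference, tolerance=8):
    """At start-up, re-measure and accept only small bit differences."""
    current = read_puf(noise=2)   # a fresh, slightly noisy measurement
    diff = sum(bin(a ^ b).count("1") for a, b in zip(reference, current))
    return diff <= tolerance

ref = enroll()
assert verify(ref)                # intact environment: within tolerance
```

If the card body or environment were altered, the re-measurement would differ from the stored reference by far more than the tolerance, and the check would fail.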
Some other techniques of implementing tamper resistance include
incorporating tight airflow channels, security screws, and hardened steel enclosures, all of which would make tampering with the device extremely difficult.
Physical security or hardware security, involves securing physical (often silicon) elements of a system. Physical attacks require close proximity to compromise the IoT ecosystem. Some IoT deployments, such as connected lighting or medical devices, use AES encryption to deliver firmware updates. An authorized source that can prove the knowledge of AES secret keys is the only way to securely deliver the updates. A hacker would have to
steal these AES credentials and use the stolen keys to hijack the network, but they would need to be within the proximity of the device.
Hardware-based security has proven to be much more robust than its software counterpart. A secure microcontroller that executes software from an internal, immutable memory strongly protects against attacks that attempt to breach an electronic device's hardware. This software is considered to be
the "root of trust" because, stored in the microcontroller's ROM, it cannot be modified. This trusted software can be used to verify and authenticate the application's software signature. A hardware-based root-of-trust methodology starts from the bottom of the design, enabling you to close off more potential entry points into your design than a software-based approach would allow.
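A simplified sketch of that boot-time verification, using a fixed digest to stand in for the immutable value held in ROM. Production roots of trust normally verify an asymmetric signature rather than a bare hash, so treat this as an outline of the control flow only.

```python
import hashlib
import hmac

# Stand-in for a value fixed in the microcontroller's ROM at manufacture.
APP_IMAGE = b"application firmware bytes"
ROM_DIGEST = hashlib.sha256(APP_IMAGE).digest()

def secure_boot(image):
    """Refuse to run an image whose digest doesn't match the ROM value."""
    measured = hashlib.sha256(image).digest()
    # compare_digest avoids leaking match position via timing.
    if not hmac.compare_digest(measured, ROM_DIGEST):
        raise RuntimeError("image verification failed; halting boot")
    return "booted"

assert secure_boot(APP_IMAGE) == "booted"

try:
    secure_boot(APP_IMAGE + b"\x00")   # any modification changes the digest
except RuntimeError:
    pass                               # boot is refused, as intended
```

Because the verifier itself lives in immutable ROM, an attacker cannot simply patch the check out, which is what makes it a root of trust.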
An invasive attack is when
the processor is physically manipulated in some way and involves the chip being visible and exposed.
Secure microcontrollers also support challenge-response authentication, which comes in two flavors. Symmetric cryptography-based authentication utilizes a shared secret key, or number, between the host and the device to be authenticated. A device is authenticated when digital signature computations triggered by a random number (the challenge) sent by the host to the device are a match between the
two sides. To ensure that results can't be imitated, a function with adequate mathematical properties—such as SHA-256 secure hash functions—is critical.
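The symmetric challenge-response exchange can be sketched with HMAC-SHA256; the key and challenge sizes here are illustrative.

```python
import hashlib
import hmac
import secrets

SHARED_KEY = b"secret shared at provisioning"   # held by both host and device

def device_respond(key, challenge):
    """Device side: compute HMAC-SHA256 over the host's random challenge."""
    return hmac.new(key, challenge, hashlib.sha256).digest()

def host_authenticate(device_key):
    """Host side: issue a fresh challenge and compare both computations."""
    challenge = secrets.token_bytes(16)          # fresh randomness defeats replay
    expected = hmac.new(SHARED_KEY, challenge, hashlib.sha256).digest()
    response = device_respond(device_key, challenge)
    # Constant-time comparison so results can't be probed via timing.
    return hmac.compare_digest(expected, response)

assert host_authenticate(SHARED_KEY)             # genuine device passes
assert not host_authenticate(b"wrong key")       # counterfeit device fails
```

Only a device holding the shared secret can produce the matching response, and because each challenge is random, captured responses cannot be replayed later.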