India’s introduction of its first comprehensive personal data law
Introduction to DPDP
On 11 August 2023, six years after the Supreme Court of India declared the right to privacy a fundamental right, India passed a ‘comprehensive personal data law.’ This is a major legal landmark for the country as it is its ‘first comprehensive law for the protection of digital personal data’. India is the 19th of the G20 countries to pass such a law, and 11 August marks the end of the five-year process to do so. Notably, the Government has not announced the date on which the Act will come into effect, and, in a departure from the approach taken with the General Data Protection Regulation (GDPR) in the E.U., there will be no mandated transitional period. Nonetheless, existing data privacy laws in India were both patchy and dated, so the new regulation is a significant improvement for India in the privacy and technology space.
India’s data protection regime will be supplemented by rules made by the Central Government and enforced by the Data Protection Board of India, a new adjudicatory body which will monitor compliance. As the Act provides principle-based obligations, the law will become clearer once the Board begins to apply it. Broadly speaking, the Act aims to balance business innovation with individual privacy rights, and to increase accountability among data fiduciaries processing data within India. The term data ‘fiduciary’, introduced by the DPDP Act, refers to ‘an entity which determines how personal data is processed.’
What is the DPDP?
The Digital Personal Data Protection Act of India borrows from the E.U.’s GDPR in both its definition of personal data and its coverage of all entities that process personal data. The DPDP applies only to ‘personal data capable of identifying the data principal, which is either collected digitally or is digitised after it is collected non-digitally.’ ‘Data principal’ here differs from the GDPR’s term ‘data subject’ but denotes the same meaning: the natural person to whom the data belongs. Notably, consent remains the primary legal ground for processing personal data. The Act does grant wide governmental exemptions on grounds such as the security of the State, alongside exemptions for publicly available data.
Importantly, the Act does not limit the processing of digital personal data within India to the data of Indian citizens. This means that fiduciaries based within the country which process the data of those outside India will also be affected by the Act. However, the DPDP does not apply to data fiduciaries based outside of India which merely monitor data subjects or principals inside the country.
Whilst all data fiduciaries are subject to the DPDP, they have different responsibilities. The Act categorises data fiduciaries according to their size and assigns compliance responsibilities accordingly. Organisations classed as ‘Significant Data Fiduciaries’ (SDFs) have increased compliance responsibilities, including but not limited to the appointment of a resident Data Protection Officer (DPO). Classification as an SDF will be based upon the ‘volume and sensitivity of personal data collected.’ At present, however, critics say that the criteria for identifying an SDF ‘lack quantifiable thresholds.’ Critics have similarly been outspoken about the broad exemptions the government is permitted to grant data fiduciaries.
Differences between the DPDP Act and the GDPR
The concepts behind the DPDP are broadly similar to those of the GDPR, but there are a few notable exceptions. Unlike the GDPR, the DPDP Act places no additional controls on the processing of sensitive personal data such as racial origin, religious beliefs or sexual orientation. It is also unclear whether consent notices under the DPDP must specify a retention period. In contrast to the GDPR, personal data made publicly available does not fall within the scope of the DPDP Act. The DPDP Act does, however, impose far more stringent requirements for reporting data breaches, whereby all personal data breaches are ‘mandatorily reportable’ to the Board. The DPDP Act also does not contain the ‘right to be forgotten’, unlike the GDPR.
Additionally, the power the Act grants the government to demand information from data fiduciaries, as well as to block certain content, has drawn widespread criticism and claims that it violates the right to privacy as well as the right to information. These exemptions granted to the central government within the DPDP Act have certainly raised concerns about the data privacy of E.U. citizens within India.
Although this seems to cast the new framework in a negative light, there may be advantages to it. One may be the increased simplicity of cross-border transfers: personal data may be transferred to any country except those black-listed by the Government. The GDPR, by contrast, uses a white-listing process, an approach India discarded in a previous draft of the DPDP. That earlier draft caused alarm among various tech companies and was deemed too ‘cumbersome’ for a country which outsources as much as India.
Whilst the DPDP Act takes a different approach from the E.U.’s in many respects and has drawn much criticism, India is taking steps in the right direction amid global advances in tech and Artificial Intelligence.
New Chinese regulation of generative A.I.
On 15 August, China’s first regulatory measures on generative A.I. came into effect, following a boom in Artificial Intelligence in the country. A draft was first released by the Cyberspace Administration of China (CAC) on 11 April 2023. Following a consultation period, the Interim Measures for the Management of Generative Artificial Intelligence Services (hereafter the Interim Measures) were published in July, jointly issued by the CAC and six other governmental organisations, before coming into effect on 15 August. Various aspects of the development and provision of generative A.I. are subject to these new measures.
Previously, China’s stringent regulation of A.I. prompted concern that the country might fall behind in the technology space. However, CNN posits that the $1 billion fine issued to fintech giant Ant Group was perhaps the final stage of the previous regulatory crackdown. Following its apparent change in stance, China is now being called a ‘trailblazer’ in A.I. regulation, whilst critics say that other proposed A.I. regulations, such as the E.U. A.I. Act, are ‘in limbo’. Notably, shortly after the regulations came into effect, many of China’s leading internet names, such as Alibaba and Baidu, announced their own A.I. bots. The quick turnaround from these companies suggests a strategic decision to wait for clear regulatory guidelines before launching their products.
Under the provisions, generative A.I. is grouped into three key terms: Generative A.I. Technologies, Generative A.I. Services and Generative A.I. Service Providers. All three are subject to guidelines which reflect China’s national character, in the same way that E.U. and U.S. guidelines reflect theirs. The U.S. has previously taken a relatively ‘hands-off’ approach to regulation, which it continues to do, whilst Forbes suggests that the E.U. regulation is the gold standard for incoming regulatory changes.
Whilst there are differences between China’s regulatory position and those in the West, the regulation similarly encourages a balance between innovation and state control. That said, China has previously restricted A.I. tools over concerns within the regime about misinformation and social mobilisation. For instance, following the launch of ChatGPT, authorities expressed concerns that the large language model could be used to spread misinformation, and access to it was later restricted.
These concerns can still be seen within the new regulation, according to Helen Toner, a director at Georgetown’s Center for Security and Emerging Technology. She claims that whilst China may wish to support A.I. development, a key aim for the country is to ‘maintain the ability to censor and control the information environment.’ Under the Interim Measures, generative A.I. service providers must, among other obligations, uphold core socialist values. In addition, generative A.I. service providers must undergo a safety assessment and record-filing formalities under the Administrative Provisions on Algorithm Recommendation for Internet Information Services if their services have the capacity for social mobilisation or carry a public opinion attribute.
‘Finfluencer’ Guidance: FCA issues new guidance on financial promotions on social media
In July 2023, the Financial Conduct Authority (FCA) issued a guidance consultation regarding financial promotions on social media. Following the rapid development of social media in recent years, this guidance will supersede guidance previously published in 2015. The FCA will be consulting on the proposed guidance until 11 September; before this date, firms and influencers involved in the promotion of financial products are welcome to submit responses.
The FCA committed to updating the current guidance after its quarterly data showed that social media is increasingly being used to communicate financial promotions. In a statement made in July, the FCA said that this additional work was being done in order to tackle both ‘illegal and non-compliant financial promotions’. The clarified guidance is in line with increasing regulation of the online financial sphere, with the new Consumer Duty taking effect from 31 July and new rules on the promotion of cryptoassets in the UK coming into force from 8 October 2023. As part of this increased regulation of crypto firms, the FCA will ban incentives to invest in crypto in order to mitigate high-risk investments.
The rise of social media platforms has been accompanied by an increase in the promotion of financial products across various media. The FCA has raised concerns about poor-quality financial promotions and the risks that they present to the consumer, in particular the risk created by financial communications targeting younger audiences.
Important points in the draft guidance include reminding firms of their Consumer Duty and of what a financial promotion consists of. The FCA has clarified that the regulations apply to various social media entities and representatives, stating that the rule is media-neutral: any form of communication through media can be a financial promotion if it contains an ‘invitation or inducement to engage in investment activity.’ The key stakeholders to whom the FCA guidance is relevant include not only social media platforms but also ‘influencers and unauthorised persons communicating financial promotions on social media.’
Indeed, there is increasing concern about the rise of ‘finfluencers’: influencers who promote financial products with little to no expertise. Additionally, with the ascent of ‘buy now, pay later’ platforms such as Klarna and business models such as cryptoassets, the FCA wants to clarify the guidance and the expectations it has of those organisations. The guidance also provides support to influencers who may not realise that their activities fall within the scope of regulation.
Within the guidance, the FCA also reminds firms of the basic requirements for financial promotions on social media, including being ‘fair, clear and not misleading’. In addition, firms are required to attach risk warnings to these promotions. There is also a new focus on the prominence of such risk warnings on social media, and the requirement for specific wording of risk warnings has become more ‘inflexible’. The key purpose behind the new guidance is simply to help the organisations and influencers who promote financial products on social media understand where they fall under the regulations.